
Building a Computer, Should You Inadvertently Travel Back in Time (Part 1)

I sometimes wonder if I spend too much time reading science fiction books. Similarly for watching science fiction films and TV series in general and Doctor Who in particular. The reason I say this is that I do tend to spend more time than is good for me thinking about what I would do if I inadvertently wandered into a timeslip and found myself transported back to the late 1930s or early 1940s, for example. The problems would only be exacerbated if this slip also transported me into a parallel dimension – such as one that never discovered things like Boolean algebra – in which computer science was still firmly rooted in the analog domain.

One of the things I flatter myself I would be good at would be designing a digital computer from the ground up. There are lots of things to wrap one’s brain around here, so – on the off chance anything like this happens to you – I thought I’d provide a few pertinent pointers for you to peruse and ponder.

Let’s assume that the task falls upon you to build the first digital computer on the planet. The first thing you are going to need is a good understanding of the binary number system, including signed and unsigned integers and ones’ and twos’ complement values; a basic grasp of floating-point concepts wouldn’t go amiss either. As fate would have it, I was at a friend’s house one evening earlier this week (a few of us get together to watch a couple of episodes of Doctor Who each week), and his university student son asked me some questions about radix complements and diminished radix complements, so I gifted him with a copy of my book, Bebop to the Boolean Boogie (An Unconventional Guide to Electronics), which discusses this in excruciating detail.
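To make the twos complement idea concrete, here is a minimal sketch (the helper names are my own invention, not from any particular library) showing how an n-bit signed value relates to its unsigned bit pattern:

```python
def to_twos_complement(value, bits=8):
    """Encode a signed integer as an n-bit twos complement bit pattern."""
    if not -(1 << (bits - 1)) <= value < (1 << (bits - 1)):
        raise ValueError("value out of range for the given width")
    return value & ((1 << bits) - 1)   # wrap negatives into the unsigned range

def from_twos_complement(pattern, bits=8):
    """Decode an n-bit unsigned bit pattern back to a signed integer."""
    if pattern >= (1 << (bits - 1)):   # sign bit set means a negative value
        pattern -= (1 << bits)
    return pattern

print(format(to_twos_complement(-1), "08b"))   # 11111111
print(from_twos_complement(0b11111111))        # -1
```

The attraction of this scheme, of course, is that addition and subtraction of signed values use exactly the same circuitry as their unsigned counterparts.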

Once you’ve decided on your implementation technology (relays, vacuum tubes, transistors) – which largely depends on the time in which you find yourself – a good starting point will be to decide on the fundamental architecture of your central processing unit (CPU). Will it have a single accumulator (ACC), two accumulators, an accumulator and some general-purpose registers, or just the registers? Based on this, the next step will be to decide on a set of machine-level instructions your CPU is going to use and how it will handle them (see also Weird Instructions I Have Loved by Jim Turley). Things like the ability to shift and rotate binary values, to perform logical operations (AND, OR, XOR), to perform mathematical operations (ADD, SUBTRACT), to compare two values to see which is the larger, and to jump to another location in the computer’s memory based on the results from any of these operations.
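As a thought experiment, the behavior of a simple single-accumulator machine with operations like those above can be sketched in a few lines (the mnemonics and 8-bit width here are hypothetical choices of mine, not a description of any real CPU):

```python
def run(program, memory):
    """Interpret a list of (opcode, operand) pairs on a tiny accumulator machine."""
    acc, pc = 0, 0
    while pc < len(program):
        op, arg = program[pc]
        pc += 1
        if op == "LOAD":    acc = memory[arg]
        elif op == "STORE": memory[arg] = acc
        elif op == "ADD":   acc = (acc + memory[arg]) & 0xFF   # 8-bit wraparound
        elif op == "SUB":   acc = (acc - memory[arg]) & 0xFF
        elif op == "AND":   acc &= memory[arg]                 # logical operation
        elif op == "SHL":   acc = (acc << 1) & 0xFF            # logical shift left
        elif op == "JNZ":                                      # conditional jump
            if acc != 0:
                pc = arg
    return acc

memory = [5, 3]
print(run([("LOAD", 0), ("ADD", 1)], memory))   # 8
```

Even a handful of operations like these, plus a conditional jump, is enough to compute anything you could ask of a much grander machine (given enough memory and patience).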

For the purposes of simplicity and brevity, let’s assume that – in your spare time – you’ve also created some form of read-only memory (ROM) and random-access memory (RAM), along with some form of long-term storage, possibly in the form of perforated paper products like punched cards or paper tapes.

Things really start to get interesting once you’ve actually built your machine, because now you have to program it. The computer itself works at the level of machine code instructions – these are the ones you decided your CPU would implement – each of which is represented by a different pattern of binary 0s and 1s.

So, how are you going to capture and enter your programs? One approach that was used with the first digital computers in our slice of the multi-universe was to (a) specify an address in the computer’s memory in binary using a set of toggle switches, (b) specify an instruction or a piece of data that you wanted to load into the memory, again in binary, using a set of toggle switches, (c) force load this information into the specified memory location, and (d) repeat over and over again for the remaining instructions and data. In addition to taking an inordinate amount of time and being prone to errors, this really wasn’t as much fun as I make it sound.
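The toggle-switch workflow in steps (a) through (d) can be simulated in a few lines (everything here, from the memory size to the deposit function, is an invented illustration):

```python
memory = [0] * 16   # a tiny 16-word memory

def deposit(address, word):
    """Steps (a)-(c): set the address switches, set the data switches, force-load."""
    memory[address] = word & 0xFF   # one 8-bit word per location

# Step (d): repeat for every remaining instruction and datum.
for addr, word in [(0, 0b00010101), (1, 0b00100110), (2, 0b00110111)]:
    deposit(addr, word)

print(format(memory[1], "08b"))   # 00100110
```

Now imagine doing this by hand, switch by switch, for a program hundreds of words long, and you will appreciate why a single mis-flipped toggle could ruin your whole afternoon.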

Your next step would be to define some sort of assembly language, which involves associating mnemonics with each of your instructions, like “JMP” for “Jump” and “LSHL” for “Logical Shift Left,” for example. Of course, there’s much more to this than simply selecting a set of mnemonics – you also need to define an associated syntax (what constructs are allowed, how you specify comments, all sorts of things, really).

The interesting thing is that, at this stage, you now have an assembly language, but you don’t actually have anything you can do with it. Well, that’s not strictly true. What you can do is use pencil and paper to capture your programs in your assembly language. Then you hand-assemble the program into its equivalent machine code instructions (the binary patterns of 0s and 1s the computer uses). Then you enter these instructions into the computer using your trusty toggle switches.
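Hand assembly boils down to a lookup table and some bit packing. Here is a sketch using an invented encoding (a 4-bit opcode packed with a 4-bit operand address into one 8-bit word); the mnemonics and opcode values are purely illustrative:

```python
OPCODES = {"LOAD": 0b0001, "ADD": 0b0010, "STORE": 0b0011, "JMP": 0b0100}

def encode(mnemonic, operand):
    """Pack a 4-bit opcode and a 4-bit operand address into one 8-bit word."""
    return (OPCODES[mnemonic] << 4) | (operand & 0b1111)

# The binary patterns you would then key in on the toggle switches:
for line in [("LOAD", 5), ("ADD", 6), ("STORE", 7)]:
    print(format(encode(*line), "08b"))
```

In the early days, you would perform this translation with pencil and paper; the table above is exactly the sort of crib sheet you would keep pinned next to the switch panel.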

This is probably around the time that you will capture, hand assemble, and hand load some simple utility programs that will allow you to do things to make your life easier, like reading your machine code instructions from a paper tape, for example. Along the way, you will also invent some sort of code (like ASCII in our world) that you can use to represent files of human-readable characters like letters and numbers and punctuation marks and suchlike.
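Inventing such a character code amounts to nothing more than agreeing on a mapping between symbols and small integers. A toy version (this alphabet and numbering are entirely made up, unlike real ASCII) might look like this:

```python
# Assign each symbol a small integer so text can live on paper tape.
ALPHABET = " ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789.,"
ENCODE = {c: i for i, c in enumerate(ALPHABET)}
DECODE = {i: c for i, c in enumerate(ALPHABET)}

def encode_text(text):
    """Translate a human-readable string into a list of code values."""
    return [ENCODE[c] for c in text.upper()]

def decode_text(codes):
    """Translate a list of code values back into a string."""
    return "".join(DECODE[i] for i in codes)

print(encode_text("HELLO"))   # [8, 5, 12, 12, 15]
```

With 40 symbols, six bits per character would suffice, which is not far from how several real-world early machines did it before ASCII settled on seven.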

What you really need is to be able to capture your programs in human-readable form (that is, in your assembly language) using a simple text editor, and then use a program called an assembler to translate this assembly code into the machine code equivalent that the computer understands. Unfortunately, you don’t have either of these little scamps at the moment.

This is a bit of a chicken-and-egg situation – what comes first, the assembler or the editor? If it were me, I think I’d start by using my pencil and paper to capture the assembly language description of a simple assembler, and then hand assemble this to create the machine code for my first assembler. Next, I’d use my pencil and paper to capture the assembly language description of a simple editor, and then use my rudimentary assembler to assemble this into the machine code corresponding to my editor.
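That first hand-built assembler really can be tiny. The sketch below (with a made-up mnemonic set and the same invented 4-bit opcode / 4-bit operand encoding as before) reads lines of “MNEMONIC OPERAND,” strips comments, and emits machine words:

```python
OPCODES = {"LOAD": 0b0001, "ADD": 0b0010, "STORE": 0b0011, "JMP": 0b0100}

def assemble(source):
    """Translate assembly source text into a list of 8-bit machine words."""
    words = []
    for line in source.splitlines():
        line = line.split(";")[0].strip()   # discard comments and whitespace
        if not line:
            continue                        # skip blank lines
        mnemonic, operand = line.split()
        words.append((OPCODES[mnemonic] << 4) | int(operand))
    return words

program = """
LOAD 5    ; fetch the first value
ADD 6     ; add the second value
STORE 7   ; save the result
"""
print([format(w, "08b") for w in assemble(program)])
```

No labels, no expressions, no error messages – but it is enough to stop hand-translating binary, which is the whole point of bootstrapping.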

At this point, we are really cooking on a hot stove, because now we can use our simple text editor to capture the assembly language representation for a more sophisticated assembler, then we can use our rudimentary assembler to assemble our spiffy new assembler, and then we can use our original editor and our spiffy new assembler to create a more sophisticated editor. And around and around the loop we go.

At some stage, believe it or not, the joys of capturing programs in assembly language will start to wane, at which point we will commence to contemplate moving up to a more sophisticated programming language, but what form will this take? I’ll tell you what, I’ll leave you to mull on this for a while, and then we will return to this topic in my next column. In the meantime, as always, I’d love to hear what you think about all of this.
