Building a Computer, Should You Inadvertently Travel Back in Time (Part 1)

I sometimes wonder if I spend too much time reading science fiction books. The same goes for watching science fiction films and TV series in general, and Doctor Who in particular. The reason I say this is that I do tend to spend more time than is good for me thinking about what I would do if I inadvertently wandered into a timeslip and found myself transported back to the late 1930s or early 1940s, for example. The problems would only be exacerbated if this slip also transported me into a parallel dimension – such as one that never discovered things like Boolean algebra – in which computer science was still firmly rooted in the analog domain.

One of the things I flatter myself I would be good at would be designing a digital computer from the ground up. There are lots of things to wrap one’s brain around here, so – on the off chance anything like this happens to you – I thought I’d provide a few pertinent pointers for you to peruse and ponder.

Let’s assume that the task falls upon you to build the first digital computer on the planet. The first thing you are going to need is a good understanding of the binary number system, including signed and unsigned integers and ones’ and twos’ complement values; a basic grasp of floating-point concepts wouldn’t go amiss either. As fate would have it, I was at a friend’s house one evening earlier this week (a few of us get together to watch a couple of episodes of Doctor Who each week), and his university student son asked me some questions about radix complements and diminished radix complements, so I gifted him with a copy of my book, Bebop to the Boolean Boogie (An Unconventional Guide to Electronics), which discusses this in excruciating detail.
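
By way of illustration – and using a modern scripting language as a scratchpad, which of course wouldn’t exist in our hypothetical timeline – the following little sketch shows how the ones’ complement (flip every bit) and twos’ complement (flip every bit, then add one) of an 8-bit value are formed. The function names and the 8-bit width are simply my choices for this example.

    # Forming 8-bit ones' and twos' complement values (illustrative sketch only).
    def ones_complement(value, bits=8):
        # Diminished radix complement in base 2: flip every bit.
        return value ^ ((1 << bits) - 1)

    def twos_complement(value, bits=8):
        # Radix complement in base 2: flip every bit, then add 1.
        return (ones_complement(value, bits) + 1) & ((1 << bits) - 1)

    value = 5                                        # 0000 0101
    print(format(ones_complement(value), "08b"))     # 1111 1010 (-5 in ones' complement)
    print(format(twos_complement(value), "08b"))     # 1111 1011 (-5 in twos' complement)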

Once you’ve decided on your implementation technology (relays, vacuum tubes, transistors) – which largely depends on the time in which you find yourself – a good starting point will be to decide on the fundamental architecture of your central processing unit (CPU). Will it have a single accumulator (ACC), two accumulators, an accumulator and some general-purpose registers, or just the registers? Based on this, the next step will be to decide on a set of machine-level instructions your CPU is going to use and how it will handle them (see also Weird Instructions I Have Loved by Jim Turley). I’m talking about things like the ability to shift and rotate binary values, to perform logical operations (AND, OR, XOR), to perform mathematical operations (ADD, SUBTRACT), to compare two values to see which is the larger, and to jump to another location in the computer’s memory based on the results of any of these operations.
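
Just to make this a little more concrete, below is a minimal sketch of the sort of single-accumulator machine I have in mind, once again using a modern scripting language purely as a thinking aid. The opcodes, mnemonics, and 8-bit word size are all invented for this example; your own instruction set would doubtless look different.

    # A toy single-accumulator CPU with a handful of invented instructions.
    LOAD, ADD, AND, XOR, SHL, JNZ, HALT = range(7)      # hypothetical opcode numbers

    def run(program, memory):
        acc, pc = 0, 0                                  # accumulator and program counter
        while True:
            opcode, operand = program[pc]
            pc += 1
            if opcode == LOAD:   acc = memory[operand]
            elif opcode == ADD:  acc = (acc + memory[operand]) & 0xFF
            elif opcode == AND:  acc &= memory[operand]
            elif opcode == XOR:  acc ^= memory[operand]
            elif opcode == SHL:  acc = (acc << 1) & 0xFF
            elif opcode == JNZ:  pc = operand if acc != 0 else pc
            elif opcode == HALT: return acc

    # Add the values held in memory locations 0 and 1, then stop.
    print(run([(LOAD, 0), (ADD, 1), (HALT, 0)], [3, 4]))   # prints 7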

For the purposes of simplicity and brevity, let’s assume that – in your spare time – you’ve also created some form of read-only memory (ROM) and random-access memory (RAM), along with some form of long-term storage, possibly in the form of perforated paper products like punched cards or paper tapes.

Things really start to get interesting once you’ve actually built your machine, because now you have to program it. The computer itself works at the level of machine code instructions – these are the ones you decided your CPU would implement – each of which is represented by a different pattern of binary 0s and 1s.

So, how are you going to capture and enter your programs? One approach that was used with the first digital computers in our slice of the multi-universe was to (a) specify an address in the computer’s memory in binary using a set of toggle switches, (b) specify an instruction or a piece of data that you wanted to load into the memory, again in binary, using a set of toggle switches, (c) force load this information into the specified memory location, and (d) repeat over and over again for the remaining instructions and data. In addition to taking an inordinate amount of time and being prone to errors, this really wasn’t as much fun as I make it sound.
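
If you’d like a feel for what this involves, the following sketch simulates the process: each (address, word) pair stands in for one round of setting the address switches, setting the data switches, and pressing the deposit switch. The encodings are the made-up ones from my toy machine above (3-bit opcode in the upper bits, 5-bit operand in the lower bits).

    # Front-panel loading, simulated: deposit one word at a time into memory.
    memory = [0] * 32                          # a tiny 32-word memory

    toggle_sequence = [
        (0b00000, 0b00000100),                 # address 0 <- LOAD 4  (000 00100, invented)
        (0b00001, 0b00100101),                 # address 1 <- ADD  5  (001 00101)
        (0b00010, 0b11100000),                 # address 2 <- HALT    (111 00000)
        (0b00100, 0b00000011),                 # address 4 <- the data value 3
        (0b00101, 0b00000100),                 # address 5 <- the data value 4
    ]

    for address, word in toggle_sequence:
        memory[address] = word                 # one painstaking deposit at a time

    print([format(w, "08b") for w in memory[:6]])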

Your next step would be to define some sort of assembly language, which involves associating a mnemonic with each of your instructions, like JMP for “Jump” and LSHL for “Logical Shift Left,” for example. Of course, there’s much more to this than simply selecting a set of mnemonics – you also need to describe an associated syntax (what constructs are allowed, how you specify comments, all sorts of things, really).

The interesting thing is that, at this stage, you now have an assembly language, but you don’t actually have anything you can do with it. Well, that’s not strictly true. What you can do is use pencil and paper to capture your programs in your assembly language. Then you hand-assemble the program into its equivalent machine code instructions (the binary patterns of 0s and 1s the computer uses). Then you enter these instructions into the computer using your trusty toggle switches.
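
The sketch below mechanizes what this hand assembly looks like on paper – strip the comment, look up the mnemonic’s opcode, and pack the opcode and operand together into a machine word. The 3-bit opcode / 5-bit operand format and the encodings themselves are, once again, invented purely for this example.

    # Hand assembly, mechanized for illustration: mnemonic -> opcode, then pack bits.
    OPCODES = {"LOAD": 0b000, "ADD": 0b001, "JNZ": 0b101, "HALT": 0b111}   # invented

    source = [
        "LOAD 4    ; copy memory location 4 into the accumulator",
        "ADD  5    ; add memory location 5 to the accumulator",
        "HALT 0    ; stop",
    ]

    for line in source:
        code = line.split(";")[0]                            # strip the comment
        mnemonic, operand = code.split()
        word = (OPCODES[mnemonic] << 5) | int(operand)       # 3-bit opcode, 5-bit operand
        print(format(word, "08b"), "<-", code.strip())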

This is probably around the time that you will capture, hand-assemble, and hand-load some simple utility programs that will allow you to do things to make your life easier, like reading your machine code instructions from a paper tape, for example. Along the way, you will also invent some sort of code (like ASCII in our world) that you can use to represent files of human-readable characters like letters and numbers and punctuation marks and suchlike.
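
Here’s the flavor of such a character code, using a completely made-up 6-bit encoding in which each character’s code is simply its position in a master string; any resemblance to the real world’s ASCII is purely aspirational.

    # A made-up 6-bit character code: a character's code is its index in this string.
    charset = " ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789.,?!"
    encode = {ch: i for i, ch in enumerate(charset)}

    message = "HELLO WORLD"
    print([format(encode[ch], "06b") for ch in message])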

What you really need is to be able to capture your programs in human-readable form (that is, in your assembly language) using a simple text editor, and then use a program called an assembler to translate this assembly code into the machine code equivalent that the computer understands. Unfortunately, you don’t have either of these little scamps at the moment.

This is a bit of a chicken-and-egg situation – what comes first, the assembler or the editor? If it were me, I think I’d start by using my pencil and paper to capture the assembly language description of a simple assembler, and then hand-assemble this to create the machine code for my first assembler. Next, I’d use my pencil and paper to capture the assembly language description of a simple editor, and then use my rudimentary assembler to assemble this into the machine code corresponding to my editor.
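
A first-pass assembler really can be that small. The sketch below makes two passes over the source – the first notes where any labels land, the second translates each line into a machine word – using the same invented instruction format as before. A real bootstrap assembler would, of course, be written in its own assembly language rather than in a modern scripting language.

    # A bare-bones two-pass assembler for the toy machine (invented encodings).
    OPCODES = {"LOAD": 0b000, "ADD": 0b001, "JNZ": 0b101, "HALT": 0b111}

    def assemble(lines):
        labels, statements = {}, []
        for line in lines:                            # pass 1: find label addresses
            text = line.split(";")[0].strip()         # drop comments and blank lines
            if not text:
                continue
            if text.endswith(":"):
                labels[text[:-1]] = len(statements)   # label marks the next instruction
            else:
                statements.append(text.split())
        words = []
        for mnemonic, operand in statements:          # pass 2: emit machine words
            value = labels[operand] if operand in labels else int(operand)
            words.append((OPCODES[mnemonic] << 5) | (value & 0x1F))
        return words

    program = ["loop:", "  LOAD 4", "  ADD 5", "  JNZ loop  ; keep going", "  HALT 0"]
    print([format(w, "08b") for w in assemble(program)])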

At this point, we are really cooking on a hot stove, because now we can use our simple text editor to capture the assembly language representation for a more sophisticated assembler, then we can use our rudimentary assembler to assemble our spiffy new assembler, and then we can use our original editor and our spiffy new assembler to create a more sophisticated editor. And around and around the loop we go.

At some stage, believe it or not, the joys of capturing programs in assembly language will start to wane, at which point we will commence to contemplate moving up to a more sophisticated programming language, but what form will this take? I’ll tell you what, I’ll leave you to mull this over for a while, and then we will return to this topic in my next column. In the meantime, as always, I’d love to hear what you think about all of this.
