Building a Computer, Should You Inadvertently Travel Back in Time (Part 1)

I sometimes wonder if I spend too much time reading science fiction books. Similarly for watching science fiction films and TV series in general and Doctor Who in particular. The reason I say this is that I do tend to spend more time than is good for me thinking about what I would do if I inadvertently wandered into a timeslip and found myself transported back to the late 1930s or early 1940s, for example. The problems would only be exacerbated if this slip also transported me into a parallel dimension – such as one that never discovered things like Boolean algebra – in which computer science was still firmly rooted in the analog domain.

One of the things I flatter myself I would be good at would be designing a digital computer from the ground up. There are lots of things to wrap one’s brain around here, so – on the off chance anything like this happens to you – I thought I’d provide a few pertinent pointers for you to peruse and ponder.

Let’s assume that the task falls upon you to build the first digital computer on the planet. The first thing you are going to need is a good understanding of the binary number system, including signed and unsigned integers and ones and twos complement values; a basic grasp of floating-point concepts wouldn’t go amiss either. As fate would have it, I was at a friend’s house one evening earlier this week (a few of us get together to watch a couple of episodes of Doctor Who each week), and his university student son asked me some questions about radix complements and diminished radix complements, so I gifted him with a copy of my book, Bebop to the Boolean Boogie (An Unconventional Guide to Electronics), which discusses this in excruciating detail.
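Just to make the twos complement part concrete, here’s a minimal Python sketch showing how the very same bit pattern reads differently as an unsigned value versus a twos complement signed value (the 8-bit width is purely an illustrative assumption):

```python
# Reading one 8-bit pattern as unsigned vs. twos complement signed.
# The 8-bit word size here is just an illustrative assumption.
def to_unsigned(pattern: str) -> int:
    """Read a string of 0s and 1s as an unsigned integer."""
    return int(pattern, 2)

def to_twos_complement(pattern: str) -> int:
    """Read the same string as a twos complement signed integer."""
    value = int(pattern, 2)
    if pattern[0] == "1":            # Most significant bit set means negative...
        value -= 1 << len(pattern)   # ...so subtract 2^n to recover the signed value
    return value

print(to_unsigned("11111011"))         # 251
print(to_twos_complement("11111011"))  # -5
```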

Once you’ve decided on your implementation technology (relays, vacuum tubes, transistors) – which largely depends on the time in which you find yourself – a good starting point will be to decide on the fundamental architecture of your central processing unit (CPU). Will it have a single accumulator (ACC), two accumulators, an accumulator and some general-purpose registers, or just the registers? Based on this, the next step will be to decide on a set of machine-level instructions your CPU is going to use and how it will handle them (see also Weird Instructions I Have Loved by Jim Turley). We’re talking about things like the ability to shift and rotate binary values, to perform logical operations (AND, OR, XOR), to perform mathematical operations (ADD, SUBTRACT), to compare two values to see which is the larger, and to jump to another location in the computer’s memory based on the results from any of these operations.
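To give a flavor of the sort of decisions involved, here’s a minimal Python sketch of a hypothetical single-accumulator machine. The opcodes, the 16-bit instruction format (4-bit opcode, 12-bit address), and the word size are all made-up choices for illustration, not a recommendation:

```python
# A hypothetical single-accumulator machine with a handful of instructions.
# Opcodes, instruction format, and word size are illustrative assumptions only.
OPCODES = {
    0b0001: "LOAD",   # ACC <- memory[addr]
    0b0010: "STORE",  # memory[addr] <- ACC
    0b0011: "ADD",    # ACC <- ACC + memory[addr]
    0b0100: "SUB",    # ACC <- ACC - memory[addr]
    0b0101: "AND",
    0b0110: "OR",
    0b0111: "XOR",
    0b1000: "SHL",    # Logical shift left
    0b1001: "JMP",    # Unconditional jump
    0b1010: "JZ",     # Jump if ACC == 0
}

def step(memory, acc, pc):
    """Fetch, decode, and execute one 16-bit instruction: 4-bit opcode, 12-bit address."""
    word = memory[pc]
    op, addr = OPCODES[word >> 12], word & 0xFFF
    pc += 1
    if op == "LOAD":    acc = memory[addr]
    elif op == "STORE": memory[addr] = acc
    elif op == "ADD":   acc = (acc + memory[addr]) & 0xFFFF
    elif op == "SUB":   acc = (acc - memory[addr]) & 0xFFFF
    elif op == "AND":   acc &= memory[addr]
    elif op == "OR":    acc |= memory[addr]
    elif op == "XOR":   acc ^= memory[addr]
    elif op == "SHL":   acc = (acc << 1) & 0xFFFF
    elif op == "JMP":   pc = addr
    elif op == "JZ" and acc == 0:
        pc = addr
    return acc, pc
```

Everything that follows – toggle switches, assemblers, editors – is really just a series of progressively more civilized ways of getting those binary words into memory.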

For the purposes of simplicity and brevity, let’s assume that – in your spare time – you’ve also created some form of read-only memory (ROM) and random-access memory (RAM), along with some form of long-term storage, possibly in the form of perforated paper products like punched cards or paper tapes.

Things really start to get interesting once you’ve actually built your machine, because now you have to program it. The computer itself works at the level of machine code instructions – these are the ones you decided your CPU would implement – each of which is represented by a different pattern of binary 0s and 1s.

So, how are you going to capture and enter your programs? One approach that was used with the first digital computers in our slice of the multi-universe was to (a) specify an address in the computer’s memory in binary using a set of toggle switches, (b) specify an instruction or a piece of data that you wanted to load into the memory, again in binary, using a set of toggle switches, (c) force load this information into the specified memory location, and (d) repeat over and over again for the remaining instructions and data. In addition to taking an inordinate amount of time and being prone to errors, this really wasn’t as much fun as I make it sound.
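If you’d like a feel for the mechanics, here’s a tiny Python simulation of that front-panel ritual, reusing the made-up 16-bit instruction format from the earlier sketch (the 4K memory size is equally arbitrary):

```python
# Simulating a front panel: set address switches, set data switches, press "deposit".
memory = [0] * 4096                      # 4K words of memory (an arbitrary size)

def deposit(address_switches: str, data_switches: str):
    """Force-load one word into memory, with both values entered in binary."""
    memory[int(address_switches, 2)] = int(data_switches, 2)

# Loading a two-word program, one painstaking word at a time:
deposit("000000000000", "0001000000001010")   # LOAD from address 10
deposit("000000000001", "0011000000001011")   # ADD  from address 11
```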

Your next step would be to define some sort of assembly language, which involves associating mnemonics with each of your instructions, like JMP for “Jump” and LSHL for “Logical Shift Left,” for example. Of course, there’s much more to this than simply selecting a set of mnemonics – you also need to describe an associated syntax (what constructs are allowed, how you specify comments, all sorts of things, really).

The interesting thing is that, at this stage, you now have an assembly language, but you don’t actually have anything you can do with it. Well, that’s not strictly true. What you can do is use pencil and paper to capture your programs in your assembly language. Then you hand-assemble the program into its equivalent machine code instructions (the binary patterns of 0s and 1s the computer uses). Then you enter these instructions into the computer using your trusty toggle switches.
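In practice, the hand-assembly step is just table lookup and bookkeeping. Continuing with the made-up opcodes from the earlier sketches, the pencil-and-paper translation might look something like this:

```python
# Hand assembly: look up each mnemonic's opcode and glue on the address bits.
MNEMONIC_TO_OPCODE = {"LOAD": 0b0001, "STORE": 0b0010, "ADD": 0b0011, "JMP": 0b1001}

def hand_assemble(mnemonic: str, address: int) -> str:
    """Return the 16-bit pattern you'd key into the toggle switches."""
    word = (MNEMONIC_TO_OPCODE[mnemonic] << 12) | (address & 0xFFF)
    return format(word, "016b")

print(hand_assemble("LOAD", 10))    # 0001000000001010
print(hand_assemble("ADD", 11))     # 0011000000001011
print(hand_assemble("STORE", 12))   # 0010000000001100
```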

This is probably around the time that you will capture, hand assemble, and hand load some simple utility programs that will allow you to do things to make your life easier, like reading your machine code instructions from a paper tape, for example. Along the way, you will also invent some sort of code (like ASCII in our world) that you can use to represent files of human-readable characters like letters and numbers and punctuation marks and suchlike.
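The character code itself can be whatever you like, so long as you use it consistently. Here’s a toy 6-bit code in Python, purely as an illustration (it bears no relation to real ASCII):

```python
# A made-up 6-bit character code: 26 letters, 10 digits, space, period, comma.
CHAR_TO_CODE = {ch: i for i, ch in enumerate("ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 .,")}

def encode(text: str) -> str:
    """Turn human-readable text into the bit groups you'd punch onto paper tape."""
    return " ".join(format(CHAR_TO_CODE[ch], "06b") for ch in text.upper())

print(encode("ADD 11"))  # 000000 000011 000011 100100 011011 011011
```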

What you really need is to be able to capture your programs in human-readable form (that is, in your assembly language) using a simple text editor, and then use a program called an assembler to translate this assembly code into the machine code equivalent that the computer understands. Unfortunately, you don’t have either of these little scamps at the moment.
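To show just how small that first assembler can be, here’s a bare-bones, single-pass sketch in Python, once again assuming the hypothetical “MNEMONIC address” format and opcode table from earlier (no labels or symbols yet; those come later):

```python
# A bare-bones, single-pass assembler for the hypothetical machine sketched above:
# one "MNEMONIC address" instruction per line, ';' starts a comment, no labels yet.
MNEMONIC_TO_OPCODE = {"LOAD": 0b0001, "STORE": 0b0010, "ADD": 0b0011,
                      "SUB": 0b0100, "JMP": 0b1001, "JZ": 0b1010}

def assemble(source: str) -> list[int]:
    """Translate assembly source text into a list of 16-bit machine words."""
    words = []
    for line in source.splitlines():
        line = line.split(";")[0].strip()    # Strip comments and whitespace
        if not line:
            continue
        mnemonic, operand = line.split()
        words.append((MNEMONIC_TO_OPCODE[mnemonic] << 12) | (int(operand) & 0xFFF))
    return words

program = """
    LOAD 10    ; ACC <- memory[10]
    ADD  11    ; ACC <- ACC + memory[11]
    STORE 12   ; memory[12] <- ACC
"""
print([format(w, "016b") for w in assemble(program)])
```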

This is a bit of a chicken-and-egg situation – what comes first, the assembler or the editor? If it were me, I think I’d start by using my pencil and paper to capture the assembly language description of a simple assembler, and then hand assemble this to create the machine code for my first assembler. Next, I’d use my pencil and paper to capture the assembly language description of a simple editor, and then use my rudimentary assembly program to assemble this into the machine code corresponding to my editor.

At this point, we are really cooking on a hot stove, because now we can use our simple text editor to capture the assembly language representation for a more sophisticated assembler, then we can use our rudimentary assembler to assemble our spiffy new assembler, and then we can use our original editor and our spiffy new assembler to create a more sophisticated editor. And around and around the loop we go.

At some stage, believe it or not, the joys of capturing programs in assembly language will start to wane, at which point we will commence to contemplate moving up to a more sophisticated programming language, but what form will this take? I’ll tell you what, I’ll leave you to mull on this for a while, and then we will return to this topic in my next column. In the meantime, as always, I’d love to hear what you think about all of this.
