I watched the first season of “Ted Lasso” with my family over the Thanksgiving holiday in late November. If you’re one of the few people, like me, who aren’t familiar with the series, it’s based on a 2013 concept ad series by NBC Sports touting the channel’s coverage of the UK’s top-level Premier League football (“soccer” in the US) matches. Jason Sudeikis starred in the ads and later developed the ads’ concept of an American football coach transplanted to the UK to coach a UK football team into a television series that streams on Apple TV+. My brother-in-law and sister-in-law recommended it; we watched it; and it was not what I expected. Not at all.
Here’s why. I was expecting a mindless comedy series about a misplaced American football coach who arrives on the British soccer scene, a stranger in a strange land. Sort of like Mark Twain’s novel, “A Connecticut Yankee in King Arthur’s Court,” which is about a Yankee engineer from Connecticut named Hank Morgan who gets conked on the head and wakes up in England during the reign of King Arthur, but not as well written. Well, “Ted Lasso” is a comedy series and it does revolve around soccer, but it’s not mindless and there’s a fair amount of drama mixed in with the comedy. The characters are also far more complex than I expected. The story lines and complex characters sucked me in.
Many technologies are like that, too.
We just celebrated the 50th birthday of the first commercial microprocessor, the Intel 4004, last November and will soon celebrate the 50th anniversary of the first 8-bit commercial microprocessor, the Intel 8008. If you’d formed an opinion of microprocessors when these devices were introduced in 1971 and 1972, you’d need some mental readjusting if you were suddenly transported fifty years into the future, to 2022, and saw today’s microprocessors. In 1972, microprocessors could barely get out of their own way. Today, they’re the foundation technology underlying nearly every electronic system on the planet.
In fact, microprocessors have been so successful as a species that they have splintered into myriad subspecialties including CPUs (central processing units), DPUs (data processing units), FPUs (floating-point units), GPUs (graphics processing units), IPUs (infrastructure processing units), MPUs (a common abbreviation for “microprocessor”), NPUs (network, neural, or numeric processing units), QPUs (quantum processing units), RPUs (RAID processing units), SPUs (the Synergistic Processing Unit in the Sony-IBM-Toshiba Cell microprocessor), TPUs (tensor processing units), and XPUs (fill-in-the-blank processing units).
We’re clearly in danger of running out of alphabet letters to differentiate our “PU”s. (Somehow, DSPs did not become “SPU”s or “signal processing units.”)
The same is true for microcontrollers
MCUs (microcontrollers) are in the same boat. (To one sort of reader, MCU also means “Marvel Cinematic Universe,” but, fortunately for me, I don’t cover that sort of thing in EEJournal articles.) The first microcontroller I used in a design was back in the 1970s. It was the 8-bit Mostek 3870, a single-chip implementation of the 2-chip Fairchild F8 microprocessor, packaged in a 40-pin DIP. From today’s perspective, the F8 architecture was, to put it mildly, irregular.
The Mostek 3870 microcontroller had a toe-pinching 64 bytes (not Kbytes and certainly not Mbytes) of on-chip scratchpad RAM. It was an accumulator-based architecture, so there were no data registers other than the single 8-bit accumulator. You used the 64-byte scratchpad RAM to store every bit of dynamic data in the system between calculations.
The 3870 MCU also had a 2Kbyte on-chip ROM. That’s ROM as in mask-programmable ROM. For software development, you needed an EMU-70, which was a small emulator board containing Mostek’s implementation of the two-chip F8 microprocessor chipset and a pair of sockets for 2708 EPROMs. None of this EEPROM rubbish back then.
The EMU-70 connected to the target system over a 40-pin ribbon cable. Since the 3870 MCU ran at only 4 MHz, the ribbon cable sufficed. (Later, Mostek produced the 38P70, which was a Mostek 3870 in a piggyback package. You could plug a 2Kx8 EPROM into the 28 individual pin sockets emerging from the top of the 38P70’s oversized ceramic package.)
The Fairchild F8/Mostek 3870 somehow managed to become popular despite its oddball architecture; at least, that’s what the Wikipedia entry says. Possibly, that was because it was almost the only game in town back then. After living with the Mostek 3870’s foibles for a couple of years on a design project that ultimately never made it to full production, I was known to describe the 3870/F8 as the second-worst microcontroller architecture ever developed. That was primarily due to the Mostek 3870’s limited on-chip RAM and its awkward bank-addressing scheme for that RAM.
I reserved the “worst” MCU architecture designation for the Intel 8048, which was the other possible, and even less desirable, choice I could have made back in 1977. However, Intel quickly recovered from the design errors of the 8048 and produced the derivative 8051 MCU. Now that was a very successful microcontroller architecture with a really long lifespan. Meanwhile, the 3870/F8 architecture quickly faded into blissful oblivion.
Then and now, microcontrollers were and are analogous to Swiss Army knives. Back then, microcontrollers offered lots of different features on one chip (CPU, RAM, ROM, timers, counters, interrupts, and I/O), but none of the features were top-shelf. There just wasn’t room on the die for everything, so the CPU ISAs and architectures were anything but orthogonal. Memory was limited. I/O was simplistic. The chips were chock full of quirks.
For example, the Mostek 3870 could directly address only 16 bytes of RAM at a time. An indirect scratchpad pointer register selected which 16-location bank of the 64-byte on-chip RAM was accessible. A 4-bit counter built into the pointer register could be used for auto-indexing, but the counter wrapped around to the beginning of the bank after accessing the 16th location. That was a great feature for implementing small circular buffers but a limited one for dealing with larger data objects.
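To make that concrete, here’s a minimal C sketch of the auto-indexing scheme, using the 16-byte banks and 4-bit counter described above. It models the behavior, not Mostek’s actual silicon, and all the names are mine:

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative model of the 3870's auto-indexed scratchpad access, as
 * described above: the pointer register selects a 16-byte bank within
 * the 64-byte scratchpad, and only its low 4 bits increment, so
 * accesses wrap around within the bank. */

static uint8_t scratchpad[64];  /* the MCU's entire data RAM */
static uint8_t pointer;         /* 6-bit pointer: 2 bank bits + 4 index bits */

/* Read the byte at the pointer, then auto-increment the low 4 bits only.
 * The wrap at the 16th location is what made small circular buffers
 * essentially free on this architecture. */
static uint8_t read_autoinc(void)
{
    uint8_t value = scratchpad[pointer & 0x3F];
    pointer = (pointer & 0x30) | ((pointer + 1) & 0x0F);  /* wrap in bank */
    return value;
}

int main(void)
{
    for (int i = 0; i < 64; i++)
        scratchpad[i] = (uint8_t)i;  /* fill RAM with its own addresses */

    pointer = 0x2C;                  /* bank 2, offset 12 */
    for (int i = 0; i < 6; i++)      /* reads 0x2C..0x2F, then wraps to 0x20 */
        printf("0x%02X\n", read_autoinc());
    return 0;
}
```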
Today’s microcontrollers incorporate 32-bit RISC processors running at clock rates well in excess of 100 MHz, acres of RAM, and tons of Flash EEPROM that permits in-system programming. Yes, MCUs still pinch the toes of really ambitious designers, but today’s MCUs are extremely capable machines that bear little resemblance to the Mostek 3870 or Intel 8048 from the Cretaceous era, with one exception: they’re still stuffed into one IC package. That’s what makes them MCUs.
Consider STMicroelectronics’ STM32H745xI/G microcontroller, for example. It incorporates:
- A dual-core architecture with an Arm Cortex-M7 with a double-precision FPU running at 480 MHz and a Cortex-M4 running at 240 MHz
- Up to 2 Mbytes of on-chip Flash EEPROM for embedded storage
- Up to 1 Mbyte of RAM
- Three ADCs and two DACs
- Twelve 16-bit timers
- A slew of standard interfaces including Ethernet
- A ton more stuff
You can do a lot with a chip like that. Other microcontroller vendors like Infineon, Microchip, NXP, and Renesas also offer dual-core (and more-core) microcontrollers, which are light years beyond what was available in the original 8-bit microcontrollers of the mid-1970s. There’s no way you could extrapolate from then to now.
Ditto FPGAs
If you’d formed an opinion about FPGAs when Xilinx introduced the XC2064 “Logic Cell Array” (LCA) on November 1, 1985, you’d have a similarly myopic view of programmable logic. The XC2064 LCA, which eventually came to be known as an FPGA, featured an 8×8 array of logic cells. Each logic cell incorporated a combinatorial logic section built from a RAM-based lookup table (LUT), a storage element (more commonly called a flip-flop), and a signal-routing section. Xilinx claimed that the XC2064’s 64 logic cells were “equivalent” to 1200 logic gates. In addition, the XC2064 had as many as 58 I/O pins, if you ordered it in the right package. (No amount of programmability allowed Xilinx to put 58 I/O pins into a 48-pin DIP.)
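To see why a RAM-based LUT is such a flexible building block, here’s a minimal C sketch. It assumes a 4-input LUT purely for illustration; the point is that the logic inputs form an address and the RAM contents are the truth table:

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative sketch of a RAM-based lookup table (LUT): an n-input LUT
 * is just a 2^n-bit truth table. The logic inputs form the address, and
 * the stored bit at that address is the output. A 4-input LUT is assumed
 * here for illustration. */

static int lut4(uint16_t truth, int a, int b, int c, int d)
{
    int addr = (d << 3) | (c << 2) | (b << 1) | a;  /* inputs become the address */
    return (truth >> addr) & 1;                     /* stored bit is the output */
}

int main(void)
{
    /* "Program" the LUT as a 4-input AND gate: only address 15 (all
     * inputs high) stores a 1. */
    uint16_t and4 = 1u << 15;
    printf("%d %d\n", lut4(and4, 1, 1, 1, 1), lut4(and4, 1, 1, 1, 0));

    /* Rewriting those 16 bits yields any of the 65,536 possible 4-input
     * functions; that reprogrammability is the whole trick. */
    return 0;
}
```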
Back in 1985, the XC2064 LCA/FPGA was both slow and expensive, which limited its adoption. New technologies often look clunky at first; just refer to Clayton Christensen’s book, “The Innovator’s Dilemma.” Over time, the FPGA vendors, including Xilinx (now AMD), Altera (now Intel), Lattice, and Actel (now Microchip), started to gobble up more board-level functions. First, their FPGAs assimilated voltage translators and became universal interface chips. This move alone put Texas Instruments and a few other companies out of the level-translator chip business.
Then FPGAs started sprouting DSP slices, which effectively put Texas Instruments and some other DSP vendors out of the DSP business in a relatively short amount of time. (I don’t know if TI was getting a persecution complex at this point, being put out of the logic-level translator and DSP businesses by FPGAs, but I wouldn’t blame the company for feeling somewhat put upon. Now TI is firmly in the analog semiconductor business. Let’s see FPGAs attack TI now!) Finally, FPGAs started offering the fastest SerDes serial ports on the planet. If you’re pushing the envelope on Ethernet speeds to 100, 200, 400, and 800 Gbps, you’re looking at FPGAs.
Today, FPGAs are truly universal chips, albeit really expensive ones.
Microprocessors. Microcontrollers. FPGAs. In each case, a few decades of evolution have taken the device family in new, surprising, and truly useful directions. These devices are not what they appeared to be at first glance. Very much like Ted Lasso.