
Magnitudes of Mystery

Digital Systems that Nobody Designs

Over the past five-plus decades, Moore’s Law has taken us on a remarkable rocket ride of complexity. With the number of transistors on a chip approximately doubling every two years – and we’re up to 26 of those “doublings” now – transistor counts have grown by a factor on the order of 67 million, giving us processors with over seven billion transistors, FPGAs with over thirty billion transistors, and memory devices with over a hundred billion transistors, as of 2016. It’s an absolutely astounding explosion in complexity. But, during those same fifty years, the average engineering brain’s capacity to manage that complexity has increased by approximately: Zero X.
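
For the skeptical, the arithmetic is easy to check. Here’s a minimal back-of-the-envelope sketch in Python, using the figures above:

    # Back-of-the-envelope Moore's Law arithmetic, using the article's figures.
    doublings = 26            # two-year doublings since roughly 1965
    growth = 2 ** doublings   # 67,108,864, i.e. "on the order of 67 million times"

    print(f"{doublings} doublings -> {growth:,}x more transistors per chip")
    # 26 doublings -> 67,108,864x more transistors per chip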

Yep, we’re now designing devices with tens of billions of transistors using the same old 1965-era brains that were straining to manage fifty. 

We have the EDA industry to thank for this uncanny feat. When our feeble noggins struggled with manually taping out a few hundred transistors, EDA gave us automated schematic capture, gate-level simulation, and automatic layout. When those tools struggled under the weight of progress, EDA stepped it up and gave us RTL-based design, logic synthesis, RTL simulation, and dozens of other technologies that exponentially expanded our ability to get all those flops flipping together in some semblance of order. Today, with tools synthesizing from behavioral high-level languages, formal verification technology, and droves of capable high-level IP blocks, yet another layer of automation has come to our rescue. Our level of design abstraction is ever-rising, and tool capacities and capabilities are ever-improving, enabling us to keep some tenuous grip on the big picture of what we’re engineering.

One less-tangible result of all this progress is that, at a detailed level, modern digital hardware was actually designed by no one. It can’t be. We could spend our entire careers just trying to count the transistors, and we wouldn’t ever finish. Figuring out what they actually do is completely out of the question.

And transistor capacity alone does not come close to measuring the increase in the complexity of our systems. Consider that the complexity of the software that runs on those systems has expanded at a rate arguably faster than the transistor count. Here again, our level of design abstraction has risen in an attempt to keep pace with exponentially expanding complexity. Machine language coding gave way to assembly, which gave way to procedural languages, and then to ever-higher-level object-oriented languages. Today, with the rapid evolution of artificial neural networks (arguably more progress in AI in the past two years than in all the years before), software is being created with algorithms that no human has ever actually seen. The software has, in effect, designed itself.
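
To make “software that designs itself” concrete, here’s a deliberately tiny sketch (plain Python, everything about it illustrative): a single artificial neuron trained to behave as an AND gate. Nobody writes the algorithm; it emerges as a handful of learned numbers.

    # Train one artificial neuron to act as an AND gate (toy illustration).
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w0, w1, bias = 0.0, 0.0, 0.0

    for _ in range(100):                       # a few passes over the data suffice
        for (x0, x1), target in data:
            out = 1 if w0 * x0 + w1 * x1 + bias > 0 else 0
            w0 += 0.1 * (target - out) * x0    # classic perceptron update rule
            w1 += 0.1 * (target - out) * x1
            bias += 0.1 * (target - out)

    # The finished "program" is just three numbers that no human wrote:
    print(w0, w1, bias)
    for (x0, x1), target in data:
        assert (1 if w0 * x0 + w1 * x1 + bias > 0 else 0) == target

Scale those three learned numbers up to billions, and you have today’s neural networks: behavior that no one has read, let alone written.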

Yep, we now have hyper-complex systems where neither the hardware nor the software – at the detailed level – has been seen or understood by any human.

Sure, our hardware engineers may have stitched together some high-level blocks to create an SoC that integrates data from multiple sensors, passing it along through a sequence of various sorts of processing engines for refinement and interpretation. But the actual logic of those hardware blocks was created by synthesis algorithms designed long ago, verified by massive simulation, emulation, and formal verification technologies, and placed-and-routed by algorithms that can alter logic in order to achieve performance goals. Could even the most expert among us pick an arbitrary group of transistors in the final chip layout and say for sure what they’re up to, or how and why they do it? Not likely.

We must place a DEEP level of trust in the companies that deliver the EDA technology we depend on.

Most EDA technology – and much of the IP that goes with it – comes from three medium-sized companies: Synopsys, Cadence, and Mentor. The software sold by these three organizations designs and verifies most of the chips made on Earth. Throw in IP from ARM, and you account for most of the actual “intelligence” on the hardware side of just about any system in the world. If you design with FPGAs, you’re in the hands of tool developers from Xilinx and Intel/Altera. The engineers in the VERY small community who develop the key algorithms at these companies periodically float from one of them to another as career situations change, forming a small but loose network of technologists who, arguably, control how every high-tech device in the world actually works. There is no separation of powers here, no checks and balances. The technology that verifies that our circuits will do what we want comes from the same companies that created the software that designed them in the first place.

This should be a sobering thought for anyone who uses a computer, a cell phone, the internet, the cloud, an automobile, a train, an airplane, or a medical device.

Now, before anyone jumps to the conclusion that I’m creating wild conspiracy theories here, let me say for the record that I’ve personally worked directly with EDA engineers for the past several decades. And I firmly believe there are no nefarious global conspiracies at work.

But consider the dark possibilities for a moment. Software that creates logic circuits with tens of millions of logic gates would have little trouble burying hardware back doors or easter eggs in the mix. The complexity of the EDA software tools themselves is such that nobody would be likely to notice. These routines could even lie dormant for decades, waiting for the right design circumstances to come along before injecting their additional logic. Wouldn’t the bogus circuitry be caught during verification? Oh, you mean using the verification software tools that came from these very same three companies?
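
To be concrete about what “lying dormant” could look like, here is a deliberately toy sketch, hypothetical from top to bottom (no real tool works this way): a pass-through “synthesis” step that behaves honestly for every design except one matching a rare fingerprint.

    import hashlib

    TRIGGER = "deadbeef"  # hypothetical fingerprint of the "right design circumstances"

    def synthesize(netlist: str) -> str:
        """Toy stand-in for a synthesis pass (illustration only)."""
        fingerprint = hashlib.sha256(netlist.encode()).hexdigest()
        if fingerprint.startswith(TRIGGER):
            # Dormant branch: it fires so rarely that no regression suite,
            # and no casual reader of the code, is likely to see it act.
            return netlist + "\nGATE backdoor_enable\n"
        return netlist  # every other design is processed honestly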

And, speaking of verification, even barring any collusion between implementation and verification code, our ability to verify our systems centers on two key questions: “Does the system do the things we want/expect?” and “Does the system avoid doing the things we specifically asserted it should not do?” Left out of this equation is a third question: “Does the system do something we did not intend and never even considered?” Our verification process is unlikely to answer it.
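
A toy sketch of that blind spot (all names hypothetical): a test suite can fully answer the first two questions and still say nothing about the third.

    def alu(a: int, b: int, op: str) -> int:
        """Toy device under test, with one behavior nobody specified."""
        if op == "add":
            return a + b
        if op == "sub":
            return a - b
        if op == "diag":           # unintended, never-considered behavior
            return 0xDEADBEEF      # leaks a magic value
        raise ValueError(op)

    # Question 1: does the system do the things we want/expect?
    assert alu(2, 3, "add") == 5
    assert alu(5, 3, "sub") == 2

    # Question 2: does it avoid what we specifically asserted it should not do?
    try:
        alu(1, 1, "mul")           # we asserted: no multiply support
        raise AssertionError("mul must be rejected")
    except ValueError:
        pass

    # Question 3, "does it do something we never considered?", has no test,
    # because by definition we never thought to write one.
    print("all checks pass")       # and the "diag" behavior ships anyway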

And the task of finding a few dozen gates out of place in a design that consists of tens of millions of gates would make finding needles in haystacks seem like a breeze.
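
For a sense of scale (both numbers below are illustrative assumptions, not measurements):

    rogue_gates = 36           # "a few dozen" out-of-place gates (assumed)
    total_gates = 30_000_000   # "tens of millions" of gates (assumed)
    print(f"roughly 1 rogue gate per {total_gates // rogue_gates:,} gates")
    # roughly 1 rogue gate per 833,333 gates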

Wouldn’t somebody at an EDA company notice the offending code? Again, several million lines of highly complex software make up these EDA tools. And most of them use software IP sourced from a wide variety of third parties – parsers from here, logic minimization algorithms from there, scripting language support from somewhere else, graphics rendering from yet another place, various analysis packages from universities, open source communities, and long-forgotten acquired software companies… All of this has been developed and has evolved over the course of several decades. Today, I’d argue that nobody completely understands how any complex EDA tool actually works.

Removing our tinfoil hats for a moment, the best case we can make for the non-existence of such evildoing is perhaps a parallel to Stephen Hawking’s famous question: “If time travel is possible, where are all the tourists from the future?” If EDA engineers have been quietly staging a takeover of the world for the last three decades, what the heck are they waiting for?

As we move headlong toward civilization-changing technologies such as autonomous vehicles controlled by massive computing systems running AI algorithms on input from advanced sensors, raging battles between security technology and cyber weapons, and real-world weaponry under the control of automated networked systems, it is worth pausing as engineers to ask ourselves – what do we ACTUALLY know about the incomprehensibly complex systems we are creating? How much do we take on faith? How much can we actually verify? These are not simple questions. 

2 thoughts on “Magnitudes of Mystery”

  1. I should hope that my brain capacity has NOT “increased by Zero X”.
    I would be satisfied if it had increased by Zero % or changed by 1.0 times.

    (And don’t get me started on expressions like “10 times smaller”!)

    I suspect that I have actually lost a few neurons since 1965, though 50 years’ experience should have given me many more synapse connections.

  2. @dmlee: I’m gonna argue that increasing by zero X works mathematically. If capacity is represented by C, then increasing by zero X gives C + 0*C = C.

    Actually, I claim “10 times smaller” works too; it just doesn’t mean what most people who use it intend:
    C - 10*C = -9C

    I think most marketers who use it mean to say “90% smaller”:
    C - 0.9*C = 0.1C

    🙂
    Kevin
