
Magnitudes of Mystery

Digital Systems that Nobody Designs

Over the past five-plus decades, Moore’s Law has taken us on a remarkable rocket ride of complexity. With the number of transistors on a chip approximately doubling every two years – and we’re up to 26 of those “doublings” now – the transistor count on a chip has grown by a factor on the order of 67 million, giving us processors with over seven billion transistors, FPGAs with over thirty billion transistors, and memory devices with over a hundred billion transistors, as of 2016. It’s an absolutely remarkable explosion in complexity. But, during those same fifty years, the average engineering brain’s capacity to manage that complexity has increased by approximately: Zero X.
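The arithmetic behind that “67 million” figure is nothing more than repeated doubling; a quick sketch:

```python
# 26 doublings since 1965 means the transistor count is multiplied by 2**26.
doublings = 26
multiplier = 2 ** doublings
print(f"{multiplier:,}")  # 67,108,864 -- roughly the "67 million times" cited
```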

Yep, we’re now designing devices with tens of billions of transistors using the same old 1965-era brains that were straining to manage fifty. 

We have the EDA industry to thank for this uncanny feat. When our feeble noggins struggled with manually taping out a few hundred transistors, EDA gave us automated schematic capture, gate-level simulation, and automatic layout. When those tools struggled under the weight of progress, EDA stepped it up and gave us RTL-based design, logic synthesis, RTL simulation, and dozens of other technologies that exponentially expanded our ability to get all those flops flipping together in some semblance of order. Today, with tools synthesizing from behavioral high-level languages, formal verification technology, and droves of capable high-level IP blocks, yet another layer of automation has come to our rescue. Our level of design abstraction is ever-rising, and tool capacities and capabilities are ever-improving, enabling us to keep some tenuous grip on the big picture of what we’re engineering.

One less-tangible result of all this progress is that, at a detailed level, modern digital hardware was actually designed by no one. It can’t be. We could spend our entire careers just trying to count the transistors, and we wouldn’t ever finish. Figuring out what they actually do is completely out of the question.

And transistor capacity alone does not come close to measuring the increase in the complexity of our systems. Consider that the complexity of the software that runs on those systems has expanded at a rate arguably faster than the transistor count. Here again, our level of design abstraction has risen in an attempt to keep pace with exponentially expanding complexity. Machine language coding gave way to assembly, which gave way to procedural languages, and then to ever-higher-level object-oriented languages. Today, with the rapid evolution of artificial neural networks (arguably, more progress has been made in AI in the past two years than in all the years before that), software is being created with algorithms that no human has ever actually seen. In a very real sense, the software has designed itself.

Yep, we now have hyper-complex systems where neither the hardware nor the software – at the detailed level – has been seen or understood by any human.

Sure, our hardware engineers may have stitched together some high-level blocks to create an SoC that integrates data from multiple sensors, passing it along through a sequence of various sorts of processing engines for refinement and interpretation. But the actual logic of those hardware blocks was created by synthesis algorithms designed long ago, verified by massive simulation, emulation, and formal verification technologies, and placed-and-routed by algorithms that can alter logic in order to achieve performance goals. Could even the most expert among us pick an arbitrary group of transistors in the final chip layout and say for sure what they’re up to, or how and why they do it? Not likely.

We must place a DEEP level of trust in the companies that deliver the EDA technology we depend on.

Most EDA technology – and much of the IP that goes with it – comes from three medium-sized companies: Synopsys, Cadence, and Mentor. The software sold by these three organizations designs and verifies most of the chips made on earth. Throw in IP from ARM, and you account for most of the actual “intelligence” on the hardware side of just about any system in the world. If you design with FPGAs, you’re in the hands of tool developers from Xilinx and Intel/Altera. The VERY small community of engineers who develop the key algorithms at these companies periodically float from one of these to the other as career situations change, creating a small but loose community of technologists who, arguably, control how every high-tech device in the world actually works. There is no separation of powers here, no checks and balances. The technology that verifies that our circuits will do what we want comes from the same companies who created the software that designed them in the first place. 

This should be a sobering thought for anyone who uses a computer, a cell phone, the internet, the cloud, an automobile, train, airplane, or medical device.

Now, before anyone jumps to the conclusion that I’m creating wild conspiracy theories here, let me say for the record that I’ve personally worked directly with EDA engineers for the past several decades. And I firmly believe there are no nefarious global conspiracies at work.

But consider the dark possibilities for a moment. Software that creates logic circuits with tens of millions of logic gates would have little trouble burying hardware back doors or easter eggs in the mix. The complexity of the EDA software tools themselves is such that nobody would be likely to notice. These routines could even lie dormant for decades, waiting for the right design circumstances to come along before injecting their additional logic. Wouldn’t the bogus circuitry be caught during verification? Oh, you mean by the verification software tools that came from these very same three companies?

And, speaking of verification, even barring any collusion between implementation and verification code, our ability to verify our systems centers on two key questions: “Does the system do the things we want and expect?” and “Does the system avoid doing the things we specifically asserted it must not?” Left out of this equation is a third question: “Does the system do something I did not intend, and never even considered?” Our verification process is unlikely to answer it.
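To make that gap concrete, here’s a deliberately contrived sketch – in Python rather than RTL, and with an invented trigger value – of a “design” that passes every check we thought to write while hiding behavior nobody tested for:

```python
# A toy "design": an adder with a hidden easter egg. For one specific
# input pattern (the trigger value is invented for illustration),
# the result is deliberately wrong.
def adder(a, b):
    if (a, b) == (0xDEADBEEF, 0xCAFEF00D):  # the dormant trigger
        return 0
    return a + b

# Question 1: does it do the things we want and expect?
assert adder(2, 3) == 5
assert adder(10**9, 1) == 1_000_000_001

# Question 2: does it avoid the things we asserted it must not do?
# e.g., "never return a negative result for non-negative inputs":
assert adder(7, 9) >= 0

# Every check passes. The unasked third question -- "does it do
# something I never considered?" -- is exactly where the trigger hides:
print(adder(0xDEADBEEF, 0xCAFEF00D))  # 0, not the true sum
```

No directed test or assertion above ever exercises the trigger, so the “verification” suite passes cleanly.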

And the chance of human engineers spotting a few dozen gates out of place in a design that consists of tens of millions of gates would make finding a needle in a haystack seem like a breeze.

Wouldn’t somebody at an EDA company notice the offending code? Again, several million lines of highly complex software make up these EDA tools. And most of them use software IP sourced from a wide variety of third parties – parsers from here, logic minimization algorithms from there, scripting language support from somewhere else, graphics rendering from yet another place, various analysis packages from universities, open source communities, and long-forgotten acquired software companies… All of this has been developed and has evolved over the course of several decades. Today, I’d argue that nobody completely understands how any complex EDA tool actually works.

Removing our tinfoil hats for a moment, the best case we can make for the non-existence of such evildoings is maybe a parallel to Stephen Hawking’s quote: “If time travel is possible, where are all the tourists from the future?” If EDA engineers have been quietly staging a takeover of the world for the last three decades, what the heck are they waiting for?

As we move headlong toward civilization-changing technologies such as autonomous vehicles controlled by massive computing systems running AI algorithms on input from advanced sensors, raging battles between security technology and cyber weapons, and real-world weaponry under the control of automated networked systems, it is worth pausing as engineers to ask ourselves – what do we ACTUALLY know about the incomprehensibly complex systems we are creating? How much do we take on faith? How much can we actually verify? These are not simple questions. 

2 thoughts on “Magnitudes of Mystery”

  1. I should hope that my brain capacity has NOT “increased by Zero X”.
    I would be satisfied if it had increased by Zero % or changed by 1.0 times.

    (And don’t get me started on expressions like “10 times smaller”!)

    I suspect that I have actually lost a few neurons since 1965, though 50 years experience should have given me many more synapse connections.

  2. @dmlee: I’m gonna argue that increasing by zero X works mathematically. If capacity was represented by C, then increasing by zero X would be C+0*C=C

    Actually, I claim “10 times smaller” works too, it just doesn’t mean what most people who use it intend:

    I think most marketers who use it meant to say “90% smaller”


