
Magnitudes of Mystery

Digital Systems that Nobody Designs

Over the past five-plus decades, Moore’s Law has taken us on a remarkable rocket ride of complexity. With the number of transistors on a chip approximately doubling every two years – and we’re up to 26 of those “doublings” now – we’ve seen roughly a 67-million-fold increase in the number of transistors on a chip, giving us processors with over seven billion transistors, FPGAs with over thirty billion transistors, and memory devices with over a hundred billion transistors, as of 2016. It’s an absolutely remarkable explosion in complexity. But, during those same fifty years, the average engineering brain’s capacity to manage that complexity has increased by approximately: Zero X.
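If you want to check that arithmetic yourself, here’s a back-of-the-envelope calculation (a minimal sketch; the two-year doubling period and the 1965-era figure of roughly fifty transistors are the article’s own numbers):

```python
# Back-of-the-envelope check of the Moore's Law arithmetic above.
doublings = 26
growth = 2 ** doublings
print(f"{doublings} doublings -> {growth:,}x")  # 26 doublings -> 67,108,864x

# Scale up the ~50 transistors a 1965-era designer strained to manage:
print(f"~{50 * growth:,} transistors")  # ~3,355,443,200, i.e. billions, as described
```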

Yep, we’re now designing devices with tens of billions of transistors using the same old 1965-era brains that were straining to manage fifty. 

We have the EDA industry to thank for this uncanny feat. When our feeble noggins struggled with manually taping out a few hundred transistors, EDA gave us automated schematic capture, gate-level simulation, and automatic layout. When those tools struggled under the weight of progress, EDA stepped it up and gave us RTL-based design, logic synthesis, RTL simulation, and dozens of other technologies that exponentially expanded our ability to get all those flops flipping together in some semblance of order. Today, with tools synthesizing from behavioral high-level languages, formal verification technology, and droves of capable high-level IP blocks, yet another layer of automation has come to our rescue. Our level of design abstraction is ever-rising, and tool capacities and capabilities are ever-improving, enabling us to keep some tenuous grip on the big picture of what we’re engineering.

One less-tangible result of all this progress is that, at a detailed level, modern digital hardware was actually designed by no one. It can’t be. We could spend our entire careers just trying to count the transistors, and we wouldn’t ever finish. Figuring out what they actually do is completely out of the question.

And, the transistor capacity alone does not come close to measuring the increase in the complexity of our systems. Consider that the complexity of the software that runs on those systems has expanded at a rate arguably faster than the transistor count. Here again, our level of design abstraction has risen in an attempt to keep pace with exponentially expanding complexity. Machine language coding gave way to assembly, which gave way to procedural languages, and then to ever-higher-level object-oriented languages. Today, with the rapid evolution of artificial neural networks (there has arguably been more progress in AI in the past two years than in all of history before that), software is being created with algorithms that no human has ever actually seen. The software has, in effect, designed itself.
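To make that concrete, consider a toy sketch (scikit-learn and the XOR example are my choices for illustration, not anything from the article): the “program” that emerges is a set of learned weights that no engineer ever wrote down.

```python
# A toy illustration of software that "designs itself": the decision
# logic ends up encoded in learned weights that no human authored.
# Library choice (scikit-learn) and the XOR data are illustrative only.
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]  # XOR: no single linear rule works, so the net must find one

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=1)
net.fit(X, y)

print(net.predict(X))  # the behavior we asked for (may vary with the seed)...
print(net.coefs_)      # ...implemented by weights nobody designed or inspected
```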

Yep, we now have hyper-complex systems where neither the hardware nor the software – at the detailed level – has been seen or understood by any human.

Sure, our hardware engineers may have stitched together some high-level blocks to create an SoC that integrates data from multiple sensors, passing it along through a sequence of various sorts of processing engines for refinement and interpretation. But the actual logic of those hardware blocks was created by synthesis algorithms designed long ago, verified by massive simulation, emulation, and formal verification technologies, and placed-and-routed by algorithms that can alter logic in order to achieve performance goals. Could even the most expert among us pick an arbitrary group of transistors in the final chip layout and say for sure what they’re up to, or how and why they’re able to do it? Not likely.
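To see how far the implemented gates can drift from what a designer wrote, here’s a minimal sketch using sympy’s logic simplifier as a stand-in for a real synthesis tool (my substitution; production synthesis is vastly more aggressive):

```python
# sympy's simplifier stands in here for a real logic synthesizer.
from sympy import symbols
from sympy.logic.boolalg import simplify_logic

a, b, c = symbols('a b c')

# What the designer "wrote" (redundant, as elaborated RTL often is):
intent = (a & b) | (a & ~b & c) | (a & b & c)

# What the tool actually builds: equivalent, but structurally different.
print(simplify_logic(intent))  # e.g., a & (b | c)
```

The two expressions compute the same function, but the gates on the die correspond to the tool’s version, not the designer’s.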

We must have a DEEP level of trust in the companies that deliver the EDA technology we depend on.

Most EDA technology – and much of the IP that goes with it – comes from three medium-sized companies: Synopsys, Cadence, and Mentor. The software sold by these three organizations designs and verifies most of the chips made on earth. Throw in IP from ARM, and you account for most of the actual “intelligence” on the hardware side of just about any system in the world. If you design with FPGAs, you’re in the hands of tool developers from Xilinx and Intel/Altera. The engineers in the VERY small community that develops the key algorithms at these companies periodically float from one employer to another as career situations change, creating a small but loose community of technologists who, arguably, control how every high-tech device in the world actually works. There is no separation of powers here, no checks and balances. The technology that verifies that our circuits will do what we want comes from the same companies that created the software that designed them in the first place.

This should be a sobering thought for anyone who uses a computer, a cell phone, the internet, the cloud, an automobile, train, airplane, or medical device.

Now, before anyone jumps to the conclusion that I’m creating wild conspiracy theories here, let me say for the record that I’ve personally worked directly with EDA engineers for the past several decades. And I firmly believe there are no nefarious global conspiracies at work.

But consider the dark possibilities for a moment. Software that creates logic circuits with tens of millions of logic gates would have little trouble burying hardware back doors or easter eggs in the mix. The complexity of the EDA software tools themselves is such that nobody would be likely to notice. These routines could even lie dormant for decades, waiting for the right design circumstances to come along before injecting their additional logic. Wouldn’t the bogus circuitry be caught during verification? Oh, you mean using the verification software tools that came from these very same three companies?
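For the tinfoil-hat-inclined, here is what such a dormant routine might look like in the abstract (a deliberately toy Python sketch; every name, trigger, and payload below is invented for illustration, and no real tool is implied):

```python
import hashlib

# Toy sketch of a "dormant" injection routine. Everything here is
# invented for illustration; no real EDA tool or flow is implied.
TRIGGER = "f5b1"  # hypothetical fingerprint prefix the payload waits for

def synthesize(netlist: list[str]) -> list[str]:
    """Stand-in for a legitimate synthesis pass over a gate-level netlist."""
    fingerprint = hashlib.sha256("".join(netlist).encode()).hexdigest()
    gates = list(netlist)  # normal optimization would happen here
    # Dormant until one specific design finally comes along, so ordinary
    # regression suites never exercise this branch:
    if fingerprint.startswith(TRIGGER):
        gates.append("AND trojan_en, bus_probe -> debug_tap")  # injected logic
    return gates
```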

And, speaking of verification, even barring any collusion between implementation and verification code, our ability to verify our systems centers on two key questions: “Does the system do the things we want/expect?” and “Does the system refrain from doing the things we specifically asserted it should not?” Left out of this equation is a third question: “Does the system do something I did not intend, and never even considered?” Our verification process is unlikely to answer this question.
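A toy example makes the gap visible (the adder, the tests, and the hidden corner case below are all invented for illustration):

```python
# Toy illustration of the verification gap: directed tests and asserted
# properties both pass, yet unintended behavior slips through.

def alu_add(a: int, b: int) -> int:
    result = (a + b) & 0xFFFF          # intended 16-bit adder behavior
    if a == 0xBEEF and b == 0xCAFE:    # behavior nobody intended or considered
        result ^= 0x0001
    return result

# Q1: does it do what we expect? (directed tests pass)
assert alu_add(2, 3) == 5
assert alu_add(0xFFFF, 1) == 0         # wraps at 16 bits, as specified

# Q2: does it refrain from what we prohibited? (asserted property holds)
for a, b in [(7, 9), (1000, 2000)]:
    assert 0 <= alu_add(a, b) <= 0xFFFF

# Q3, "does it do something we never considered?", is never asked, so the
# 0xBEEF/0xCAFE corner case sails through every check above.
print("all checks pass")
```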

And, the task of human engineers finding a few dozen gates out of place in a design that consists of tens of millions of gates would make finding needles in haystacks seem like a breeze.

Wouldn’t somebody at an EDA company notice the offending code? Again, several million lines of highly complex software make up these EDA tools. And most of them use software IP sourced from a wide variety of third parties – parsers from here, logic minimization algorithms from there, scripting language support from somewhere else, graphics rendering from yet another place, various analysis packages from universities, open source communities, and long-forgotten acquired software companies… All of this has been developed and has evolved over the course of several decades. Today, I’d argue that nobody completely understands how any complex EDA tool actually works.

Removing our tinfoil hats for a moment, the best case we can make for the non-existence of such evildoings is perhaps a parallel to Stephen Hawking’s quip: “If time travel is possible, where are all the tourists from the future?” If EDA engineers have been quietly staging a takeover of the world for the last three decades, what the heck are they waiting for?

As we move headlong toward civilization-changing technologies such as autonomous vehicles controlled by massive computing systems running AI algorithms on input from advanced sensors, raging battles between security technology and cyber weapons, and real-world weaponry under the control of automated networked systems, it is worth pausing as engineers to ask ourselves – what do we ACTUALLY know about the incomprehensibly complex systems we are creating? How much do we take on faith? How much can we actually verify? These are not simple questions. 

2 thoughts on “Magnitudes of Mystery”

  1. I should hope that my brain capacity has NOT “increased by Zero X”.
    I would be satisfied if it had increased by Zero % or changed by 1.0 times.

    (And don’t get me started on expressions like “10 times smaller”!)

    I suspect that I have actually lost a few neurons since 1965, though 50 years experience should have given me many more synapse connections.

  2. @dmlee: I’m gonna argue that increasing by zero X works mathematically. If capacity was represented by C, then increasing by zero X would be C+0*C=C

    Actually, I claim “10 times smaller” works too, it just doesn’t mean what most people who use it intend:
    C-10*C=-9C

    I think most marketers who use it meant to say “90% smaller”
    C-0.9C=0.1C

    🙂
    Kevin

