
Looping the Law

Feedback Drives Design Evolution

Moore’s Law tells us that we should be able to double the number of transistors on a chip every couple of years. And, for about five decades, that has held mostly true. There are corollaries to Moore’s Law (conveniently retrofitted as the years have passed) that say we should get a proportional increase in speed and an improvement in power consumption as well. Still, at its core, Moore’s Law is mainly about lithography – we can print things smaller and smaller on silicon, and we get lots of great benefits when we do.

Nothing in Moore’s Law says we’ll be able to do anything useful with all those transistors, however. It’s up to us, as engineers, to figure out how to take advantage of the bounty that Moore’s Law is giving us. At first, it was pretty easy. It didn’t take a lot of imagination or design savvy to get a few dozen, a few hundred, or even a few thousand transistors to work in concert doing something useful and interesting. As we got into the tens and hundreds of thousands, millions, and billions, however, engineering productivity became a serious problem – a serious problem, which, of course, gave birth to the EDA industry.

I once joked that every EDA presentation in history had the same first three slides. First, a slide showing Moore’s Law swooping up and to the right giving us exponentially more transistors to work with as time passed. Second, one with a shallower sloped line showing how many transistors a talented engineer could manage to design-in. “The problem,” these PowerPoint prophets would say, “is the growing gap between the transistors we have and the transistors we can productively use.” The third slide was the payoff – “Thanks to our new tool,” – the EDA marketers would pause and smile at this point – “you’ll be able to close that gap and keep pace with Moore’s Law.”

It didn’t really matter what the tool was at that point – faster HDL simulation, better debug, faster and more powerful synthesis, better/faster place-and-route, higher-level design languages – any new EDA tool or methodology could be promoted based on closing the ominous “design gap.” Being software, however, all of these EDA tools needed one thing in order to keep pace: processing power – and lots of it.

In the 1990s, a concept that was bandied about frequently was the idea of “white space.” White space was a consequence of the design gap. Since engineering productivity was projected to grow more slowly than the number of available transistors, we would likely reach a point where we had a surplus of transistors on our chips. This surplus was called the “white space.” What does one do with white space? An early answer was “programmability.” By creating programmable logic cells (which used about 10x the transistors for the same logic function as “hard” logic), we traded white space for the privilege of programmability.

In my first job in 1983 (Oh no! Here comes one of those old-engineer-and-the-sea stories… Yes, it is – and you kids get off my lawn!) Ahem, in my first job in 1983, we were running place-and-route on gate arrays with 10,000 gates. A single run on our VAX 11/760 took on the order of 24 hours. Today, Xilinx FPGAs range up to two million logic elements – so we have to place and route something like 200 times as many objects, even after factoring in the penalty of programmability. If you consider that place-and-route is something like an n-squared compute problem, that means we’d have to do somewhere in the neighborhood of 40,000 times the computation for a place-and-route run today compared to thirty years ago. Yep, on my 1983 VAX 11/760 that would push my 1-day runtimes up to about 100 years!
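
For the skeptical, here’s that back-of-the-envelope arithmetic spelled out as a rough Python sketch. The gate counts, the n-squared cost model, and the 24-hour baseline are just the approximations quoted above, not measurements:

```python
# Rough scaling of place-and-route runtime, using the figures quoted above.
gates_1983 = 10_000          # gate-array design, circa 1983
elements_today = 2_000_000   # largest of today's FPGAs, ~2M logic elements
baseline_hours = 24          # one P&R run on the old VAX took about a day

size_ratio = elements_today / gates_1983        # ~200x more objects to place
compute_ratio = size_ratio ** 2                 # assume roughly n-squared P&R cost
runtime_hours = baseline_hours * compute_ratio  # projected onto 1983-era hardware

print(f"~{size_ratio:.0f}x the objects -> ~{compute_ratio:,.0f}x the compute")
print(f"~{runtime_hours / (24 * 365):.0f} years per run on the 1983 machine")
# ~200x the objects -> ~40,000x the compute, or roughly 110 years per run
```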

Luckily, computers and place-and-route algorithms have improved quite a bit over those three decades, so we’re not waiting 100 years for our V2000T runs in Vivado. This, in fact, is where the feedback loop comes in. The chips that we designed with those old VAXes enabled faster computation for the tools that designed the next generation, and so on. Our EDA computing performance had to evolve as fast as, or faster than, the Moore’s Law curve just to keep pace. If we were designing anything more sophisticated than large memories, a plateau in the pace of compute performance would bring the practical use of Moore’s Law to a screeching halt.

About a decade ago, the quest for more processing power took an important conceptual turn. Instead of continuing to chase clock frequencies, we began to pursue parallelism. As a result, today, rather than 8GHz monolithic processors, we have quad-core 3GHz chips. For many types of applications, this shift was benign. Two processors could go just about twice as fast as one. For many EDA algorithms, however, this was definitely not the case. Highly complex algorithms like logic synthesis or place-and-route were not easily broken down into independent, parallelizable chunks. They were performance- and capacity-limited by the speed of individual cores, and those cores weren’t getting faster at the usual pace. We were in serious danger of hitting the productivity plateau.
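
One standard way to frame that problem is Amdahl’s Law: the speedup you get from extra cores is capped by whatever fraction of the work has to run serially. A minimal sketch, with made-up parallel fractions:

```python
# Amdahl's Law: ideal speedup on n cores when only a fraction p of the
# work parallelizes. The fractions below are illustrative, not measured.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for label, p in [("mostly parallel workload", 0.99),
                 ("half-serial EDA-style workload", 0.50)]:
    print(label, [round(amdahl_speedup(p, n), 2) for n in (1, 2, 4, 8)])
# The 99%-parallel job scales almost linearly with core count; the
# half-serial job never gets past 2x, no matter how many cores you add.
```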

Luckily, EDA was working quietly in the background to leap over this barrier. Synopsys, for example, built a capability called “Automatic Compile Points” into its Synplify FPGA synthesis software. It can automatically partition a design into chunks that are then synthesized in parallel on multiple processors, letting synthesis take advantage of parallel computing and leap over the barrier posed by the slow improvement in single-processor performance. Similarly, many other EDA tools revamped themselves to adapt to this newly popular parallel computing infrastructure – with varying levels of success. For most EDA tools, this kind of change alters the fundamental architecture of the algorithm, requiring substantial rework of existing technologies.
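
The general shape of that idea (carve the design into blocks with stable interfaces, then hand each block to its own process) can be sketched in a few lines of Python. This is only an illustration of the partitioning concept; compile_partition and the block names are invented stand-ins, not anything from Synplify’s actual internals:

```python
# Toy sketch of partition-then-compile-in-parallel. "compile_partition"
# stands in for whatever real synthesis does to one independent chunk.
from concurrent.futures import ProcessPoolExecutor

def compile_partition(name: str) -> str:
    # Placeholder: synthesize one block of the design in isolation.
    return f"{name}: netlist ready"

# Hypothetical blocks whose boundaries were chosen as compile points.
partitions = ["cpu_core", "dma_engine", "pcie_bridge", "ddr_ctrl"]

if __name__ == "__main__":
    # Because each block has a stable interface, the chunks can be
    # compiled independently and the results stitched back together.
    with ProcessPoolExecutor() as pool:
        for result in pool.map(compile_partition, partitions):
            print(result)
```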

Beyond simple compute performance, our engineering productivity has grown in other ways. Our level of design abstraction has increased dramatically – from transistors and gates plopped down on schematics in the 80s, to gate- and register-level abstractions in the 90s, to high-level IP blocks and algorithmic descriptions today. Without these kinds of super-powered design tools and techniques, we simply would not be able to use all the cool stuff that Moore’s Law gives us. Short of simply filling up our SoC designs with more and more memory (for which our appetite has yet to wane), we need to continue to ramp our design capabilities at a substantial rate. That means (in part) not falling too deeply in love with the way we do any part of the design process today, because that process will need to change for us to keep up with the pace of progress.

If we ever pause, the back of this loop will come around and hit us in the head. The fuel that drives Moore’s Law is the money derived from the productive use of what it does for us. That use depends on the capabilities of engineers and their tools to create new and better things out of the raw materials that Moore’s Law gives us – and to design the computing machines that will enable the engineers of the future to outpace our own achievements. Next time you’re worried about whether the end of the exponential era will be caused by the failure of extreme UV, or the difficulties of triple-patterning, you should take a look in the mirror. The real problem might be closer to home than you think.
