
Looping the Law

Feedback Drives Design Evolution

Moore’s Law tells us that we should be able to double the number of transistors on a chip every couple of years. And, for about five decades, that has held mostly true. There are corollaries to Moore’s Law (that we have conveniently retrofitted as the years have passed) that say we should get some proportional increase in speed and improvement in power consumption as well. But, when you get down to it, Moore’s Law is mainly about lithography – we can print things smaller and smaller on silicon, and we get lots of great benefits when we do.

Nothing in Moore’s Law says we’ll be able to do anything useful with all those transistors, however. It’s up to us, as engineers, to figure out how to take advantage of the bounty that Moore’s Law is giving us. At first, it was pretty easy. It didn’t take a lot of imagination or design savvy to get a few dozen, a few hundred, or even a few thousand transistors to work in concert doing something useful and interesting. As we got into the tens and hundreds of thousands, millions, and billions, however, engineering productivity became a serious problem – a serious problem, which, of course, gave birth to the EDA industry.

I once joked that every EDA presentation in history had the same first three slides. First, a slide showing Moore’s Law swooping up and to the right, giving us exponentially more transistors to work with as time passed. Second, one with a shallower-sloped line showing how many transistors a talented engineer could manage to design in. “The problem,” these PowerPoint prophets would say, “is the growing gap between the transistors we have and the transistors we can productively use.” The third slide was the payoff – “Thanks to our new tool,” – the EDA marketers would pause and smile at this point – “you’ll be able to close that gap and keep pace with Moore’s Law.”

It didn’t really matter what the tool was at that point – faster HDL simulation, better debug, faster and more powerful synthesis, better/faster place-and-route, higher-level design languages – any new EDA tool or methodology could be promoted based on closing the ominous “design gap.” Being software, however, all of these EDA tools needed one thing in order to keep pace: processing power – and lots of it.

In the 1990s, a concept that was bandied about frequently was the idea of “white space.” White space was a consequence of the design gap. Since engineering productivity was projected to grow more slowly than the number of available transistors, we would likely reach a point where we had a surplus of transistors on our chips. This surplus was called the “white space.” What does one do with white space? An early answer was “programmability.” By creating programmable logic cells (which used about 10x the transistors of “hard” logic for the same logic function), we traded white space for the privilege of programmability.

In my first job in 1983 (Oh no! Here comes one of those old-engineer-and-the-sea stories… Yes, it is – and you kids get off my lawn!) Ahem, in my first job in 1983, we were running place-and-route on gate arrays with 10,000 gates. A single run on our VAX 11/760 took on the order of 24 hours. Today, Xilinx FPGAs range up to two million logic elements – so we have to place and route something like 200 times as many objects, even after factoring in the penalty of programmability. If you consider that place-and-route is something like an n-squared compute problem, that means we’d have to do somewhere in the neighborhood of 40,000 times the computation for a place-and-route run today compared to thirty years ago. Yep, on my 1983 VAX 11/760 that would push my 1-day runtimes up to about 100 years!
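
For the skeptical, here is that back-of-the-envelope arithmetic as a quick Python sketch. The numbers are the ones quoted above, and the n-squared cost model is only a rough approximation of how real place-and-route actually scales.

    # Rough check of the scaling argument above, using the article's numbers.
    gates_1983 = 10_000          # gate-array design, circa 1983
    elements_today = 2_000_000   # largest Xilinx FPGAs cited above

    size_ratio = elements_today / gates_1983   # ~200x more objects to place and route
    compute_ratio = size_ratio ** 2            # n-squared algorithm -> ~40,000x the work

    hours_1983 = 24                            # one run on the old VAX
    years_on_old_vax = hours_1983 * compute_ratio / (24 * 365)
    print(f"{size_ratio:.0f}x objects, {compute_ratio:,.0f}x compute, "
          f"~{years_on_old_vax:.0f} years per run on the 1983 machine")
    # -> 200x objects, 40,000x compute, ~110 years (rounded to ~100 above)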

Luckily, computers and place-and-route algorithms have improved quite a bit over those three decades, so we’re not waiting 100 years for our V2000T runs in Vivado. This, in fact, is where the feedback loop comes in. The chips that we designed with those old VAXes enabled faster computation for the tools that designed the next generation, and so on. Our EDA computing performance had to evolve as fast as or faster than the Moore’s Law curve just to keep pace. For anything more sophisticated than large memories, a plateau in the pace of compute performance would bring the practical use of Moore’s Law to a screeching halt.

About a decade ago, the quest for more processing power took an important conceptual turn. Instead of continuing to chase clock frequencies, we began to pursue parallelism. As a result, today, rather than 8GHz monolithic processors, we have quad-core 3GHz chips. For many types of applications, this shift was benign. Two processors could go just about twice as fast as one. For many EDA algorithms, however, this was definitely not the case. Highly complex algorithms like logic synthesis or place-and-route were not easily broken down into independent, parallelizable chunks. They were performance- and capacity-limited by the speed of individual cores, and those cores weren’t getting faster at the usual pace. We were in serious danger of hitting the productivity plateau.
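
One way to see why extra cores don’t rescue a mostly serial algorithm is Amdahl’s law (not something the EDA marketers put on those slides, but it captures the problem): if only a fraction of the work can be split across cores, the serial remainder caps the speedup. A tiny illustration, with invented parallel fractions rather than measurements of any real tool:

    # Amdahl's law: the serial portion of an algorithm caps its parallel speedup.
    # The parallel fractions below are hypothetical, for illustration only.
    def speedup(parallel_fraction: float, cores: int) -> float:
        serial = 1.0 - parallel_fraction
        return 1.0 / (serial + parallel_fraction / cores)

    for frac in (0.95, 0.50, 0.10):
        print(f"parallel fraction {frac:.0%}: "
              f"4 cores -> {speedup(frac, 4):.2f}x, "
              f"64 cores -> {speedup(frac, 64):.2f}x")
    # A 10%-parallel algorithm barely reaches 1.1x no matter how many cores you throw at it.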

Luckily, EDA was working quietly in the background to leap over this barrier. Synopsys, for example, built a capability called “Automatic Compile Points” into their Synplify FPGA synthesis software. It could automatically partition designs into chunks that could then be synthesized in parallel on multiple processors. That let synthesis take advantage of parallel computing and sidestep the barrier created by the slow improvement in monolithic processor performance. Similarly, many other EDA tools revamped themselves to adapt to this newly popular parallel computing infrastructure – with varying levels of success. For most EDA tools, this kind of change alters the fundamental architecture of the algorithm, requiring substantial rework of existing technologies.
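
To make the divide-and-conquer idea concrete, here is a minimal sketch of partition-then-compile-in-parallel in Python. The block names and the synthesize_block function are hypothetical stand-ins – this shows the shape of the approach, not Synplify’s actual Automatic Compile Points implementation or API.

    # Sketch: partition a design into independent blocks, synthesize each block
    # in its own process, then stitch the per-block results back together.
    # Names here are invented for illustration; no real tool's API is shown.
    from multiprocessing import Pool

    def synthesize_block(block_name: str) -> str:
        # Stand-in for per-partition synthesis; a real tool would emit a netlist here.
        return f"netlist({block_name})"

    if __name__ == "__main__":
        partitions = ["cpu_core", "dma_engine", "ddr_ctrl", "pcie_phy"]  # hypothetical blocks
        with Pool(processes=4) as pool:
            netlists = pool.map(synthesize_block, partitions)  # blocks compiled in parallel
        print(" + ".join(netlists))  # final top-level stitching step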

Beyond simple compute performance, our engineering productivity has grown in other ways. Our level of design abstraction has increased dramatically – from transistors and gates plopped down on schematics in the 80s to gate- and register-level abstractions in the 90s to high-level IP blocks and algorithmic descriptions today. Without these kinds of design super-power tools and techniques, we simply would not be able to use all the cool stuff that Moore’s Law gives us. Short of simply filling up our SoC designs with more and more memory (for which our appetite has yet to wane), we need to continue to ramp our design capabilities at a substantial rate. That means (in part) not falling too deeply in love with the way we do any part of the design process today, as that process will need to change in order for us to keep up with the pace of progress. 

If we ever pause, the back of this loop will come around and hit us in the head. The fuel that drives Moore’s Law is the money derived from the productive use of what it does for us. That use depends on the capabilities of engineers and their tools to create new and better things out of the raw materials that Moore’s Law gives us – and to design the computing machines that will enable the engineers of the future to outpace our own achievements. Next time you’re worried about whether the end of the exponential era will be caused by the failure of extreme UV, or the difficulties of triple-patterning, you should take a look in the mirror. The real problem might be closer to home than you think.
