Looping the Law

Feedback Drives Design Evolution

Moore’s Law tells us that we should be able to double the number of transistors on a chip every couple of years. And, for about five decades, that has held mostly true. There are corollaries to Moore’s Law (conveniently retrofitted as the years have passed) that say we should get a proportional increase in speed and improvement in power consumption as well. But, all of that aside, Moore’s Law is mainly about lithography – we can print things smaller and smaller on silicon, and we get lots of great benefits when we do.

Nothing in Moore’s Law says we’ll be able to do anything useful with all those transistors, however. It’s up to us, as engineers, to figure out how to take advantage of the bounty that Moore’s Law is giving us. At first, it was pretty easy. It didn’t take a lot of imagination or design savvy to get a few dozen, a few hundred, or even a few thousand transistors to work in concert doing something useful and interesting. As we got into the tens and hundreds of thousands, millions, and billions, however, engineering productivity became a serious problem – a serious problem, which, of course, gave birth to the EDA industry.

I once joked that every EDA presentation in history had the same first three slides. First, a slide showing Moore’s Law swooping up and to the right, giving us exponentially more transistors to work with as time passed. Second, a slide with a shallower-sloped line showing how many transistors a talented engineer could manage to design in. “The problem,” these PowerPoint prophets would say, “is the growing gap between the transistors we have and the transistors we can productively use.” The third slide was the payoff – “Thanks to our new tool,” – the EDA marketers would pause and smile at this point – “you’ll be able to close that gap and keep pace with Moore’s Law.”

It didn’t really matter what the tool was at that point – faster HDL simulation, better debug, faster and more powerful synthesis, better/faster place-and-route, higher-level design languages – any new EDA tool or methodology could be promoted as closing the ominous “design gap.” All of these tools, however, being software, needed one thing to keep pace: processing power – and lots of it.

In the 1990s, a concept that was bandied about frequently was the idea of “white space.” White space was a consequence of the design gap. Since engineering productivity was projected to grow more slowly than the supply of available transistors, we would likely reach a point where we had a surplus of transistors on our chips. This surplus was called the “white space.” What does one do with white space? An early answer was “programmability.” By creating programmable logic cells (which used about 10x the transistors for the same logic function as “hard” logic), we traded white space for the privilege of programmability.

In my first job in 1983 (Oh no! Here comes one of those old-engineer-and-the-sea stories… Yes, it is – and you kids get off my lawn!) Ahem, in my first job in 1983, we were running place-and-route on gate arrays with 10,000 gates. A single run on our VAX 11/760 took on the order of 24 hours. Today, Xilinx FPGAs range up to two million logic elements – so we have to place and route something like 200 times as many objects, even after factoring in the penalty of programmability. If you consider that place-and-route is something like an n-squared compute problem, that means we’d have to do somewhere in the neighborhood of 40,000 times the computation for a place-and-route run today compared to thirty years ago. Yep, on my 1983 VAX 11/760 that would push my 1-day runtimes up to about 100 years!
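
That back-of-the-envelope math is easy to check. Here is a minimal sketch in Python using the same rough figures – all of them ballpark assumptions from the paragraph above, not benchmarks of any real tool or machine:

```python
# Back-of-the-envelope scaling for place-and-route runtime.
# Every number here is a ballpark assumption, not a measurement.

gates_1983 = 10_000            # gate-array design, circa 1983
elements_today = 2_000_000     # large modern FPGA, ~2M logic elements

object_ratio = elements_today / gates_1983   # ~200x more objects to place
compute_ratio = object_ratio ** 2            # assuming roughly O(n^2) P&R

runtime_1983_days = 1.0                      # one ~24-hour run back then
runtime_on_1983_hw_years = runtime_1983_days * compute_ratio / 365

print(f"~{object_ratio:.0f}x the objects, ~{compute_ratio:,.0f}x the compute")
print(f"~{runtime_on_1983_hw_years:.0f} years per run on 1983 hardware")
```

That works out to roughly 110 years per run – “about 100 years” in round numbers.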

Luckily, computers and place-and-route algorithms have improved quite a bit over those three decades, so we’re not waiting 100 years for our V2000T runs in Vivado. This, in fact, is where the feedback loop comes in. The chips that we designed with those old VAXes enabled faster computation for the tools that designed the next generation, and so on. Our EDA computing performance had to evolve as fast as or faster than the Moore’s Law curve just to keep pace. If we were designing anything more sophisticated than large memories, a plateau in compute performance would bring the practical use of Moore’s Law to a screeching halt.

About a decade ago, the quest for more processing power took an important conceptual turn. Instead of continuing to chase clock frequencies, we began to pursue parallelism. As a result, today, rather than 8GHz monolithic processors, we have quad-core 3GHz chips. For many types of applications, this shift was benign. Two processors could go just about twice as fast as one. For many EDA algorithms, however, this was definitely not the case. Highly complex algorithms like logic synthesis or place-and-route were not easily broken down into independent, parallelizable chunks. They were performance- and capacity-limited by the speed of individual cores, and those cores weren’t getting faster at the usual pace. We were in serious danger of hitting the productivity plateau.
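
Amdahl’s law puts a number on that limit: if some fraction of an algorithm is inherently serial, piling on cores quickly stops helping. A quick sketch, with made-up numbers:

```python
# Amdahl's-law sketch: speedup on N cores when a fraction of the work
# is inherently serial. Numbers are illustrative, not measured.

def speedup(serial_fraction: float, cores: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Hypothetical: if half of a synthesis or P&R run can't be parallelized,
# the speedup is capped at 2x no matter how many cores you throw at it.
for cores in (1, 2, 4, 8, 64):
    print(f"{cores:>2} cores -> {speedup(0.5, cores):.2f}x")
```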

Luckily, EDA was working quietly in the background to leap over this barrier. Synopsys, for example, built a capability called “Automatic Compile Points” into its Synplify FPGA synthesis software, which could automatically partition designs into chunks that could then be synthesized in parallel on multiple processors. That lets synthesis take advantage of parallel computing and vaults us over the barrier created by the slow improvement in single-core performance. Similarly, many other EDA tools revamped themselves to adapt to this newly popular parallel computing infrastructure – with varying levels of success. For most EDA tools, this kind of change alters the fundamental architecture of the algorithm, requiring substantial rework of existing technologies.
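
To make the divide-and-conquer idea concrete, here is a toy sketch of the general “partition, then compile the pieces in parallel” workflow. It is not Synplify’s implementation – the partition names and functions below are hypothetical placeholders:

```python
# Toy sketch of partition-and-compile-in-parallel. NOT how any real tool
# implements it; names below are hypothetical placeholders.

from concurrent.futures import ProcessPoolExecutor


def synthesize_partition(partition: str) -> str:
    # Stand-in for a real per-partition synthesis job.
    return f"netlist({partition})"


def synthesize_design(partitions: list[str]) -> list[str]:
    # Independent partitions run on separate cores, then get stitched
    # back together at the top level.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(synthesize_partition, partitions))


if __name__ == "__main__":
    print(synthesize_design(["cpu_subsys", "dma_engine", "pcie_bridge"]))
```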

Beyond raw compute performance, our engineering productivity has grown in other ways. Our level of design abstraction has increased dramatically – from transistors and gates plopped down on schematics in the 80s, to gate- and register-level abstractions in the 90s, to high-level IP blocks and algorithmic descriptions today. Without these kinds of superpower design tools and techniques, we simply would not be able to use all the cool stuff that Moore’s Law gives us. Short of simply filling up our SoC designs with more and more memory (for which our appetite has yet to wane), we need to keep ramping our design capabilities at a substantial rate. That means (in part) not falling too deeply in love with the way we do any part of the design process today, because that process will have to change if we are to keep up with the pace of progress.

If we ever pause, the back of this loop will come around and hit us in the head. The fuel that drives Moore’s Law is the money derived from the productive use of what it does for us. That use depends on the capabilities of engineers and their tools to create new and better things out of the raw materials that Moore’s Law gives us – and to design the computing machines that will enable the engineers of the future to outpace our own achievements. Next time you’re worried about whether the end of the exponential era will be caused by the failure of extreme-UV lithography or the difficulties of triple-patterning, take a look in the mirror. The real problem might be closer to home than you think.
