
Re-interpreting Moore’s Law

The MacGuffin that Changed the World

For two or three decades, debate has raged about the longevity and relevance of Moore’s Law. Is it dead? Has it changed? Is it slowly fizzling out? Was it a law or just a projection? Is it really about transistor density only, or something more conceptual? Did Moore really say “doubles every two years,” or was it more like 18 months? Was Moore’s Law really invented by Moore, or by Carver Mead?

Moore’s article, “Cramming more components onto integrated circuits,” published in the April 19, 1965 issue of Electronics magazine, has achieved a status not unlike that of religious tomes, with generations of analysts picking at the nuance of every word. The practical upshot, though, is that we collectively came to expect that integrated circuits would double in density about every two years, and that this would bring with it a corresponding improvement in cost, capability, performance, and power consumption.
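
The power of that expectation is easiest to appreciate by doing the arithmetic. Here’s a minimal sketch in Python (the two-year doubling period is the canonical figure; the time spans are purely illustrative) of how the compounding works:

```python
# Compounding under Moore's Law: density doubles every two years.
# The doubling period is the canonical figure; the spans are illustrative.

def density_multiplier(years, doubling_period=2.0):
    """Growth factor in transistor density after a given number of years."""
    return 2 ** (years / doubling_period)

for years in (10, 20, 50):
    print(f"After {years} years: {density_multiplier(years):,.0f}x")

# After 10 years: 32x
# After 20 years: 1,024x
# After 50 years: 33,554,432x
```

Fifty years of two-year doublings is twenty-five doublings, a factor of more than 33 million, which is broadly the scale of the journey the industry actually made.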

It’s easy to get caught up in semantic arguments about the higher meaning of a fifty-plus-year-old magazine article. But Moore’s Law has been the MacGuffin of an entire industry for five decades. “What are we doing next? Well, Moore’s Law says we need to double the density again, and we’ve only got 23 months left. We’d better get to it!” Moore’s Law has been the reason, the excuse, the motivator, the savior, the law that has driven several generations of electronics and software engineers to challenge themselves and their craft, both individually and as a group, to achieve something unprecedented in human history.

And, while Moore’s Law came to be interpreted mostly as a prediction about the progress of lithography, an entire engine of innovation built up around that backbone that had nothing to do with transistor density. Our processor architectures evolved by leaps and bounds. Our ability to design complex digital systems exploded, with design automation tools giving us levels of productivity we would never have conceived of just a few years before. Our software technology improved by orders of magnitude. And there was something else: the bubbling cauldron of all those exponential progress vectors created a transcendence, an increase in technological capability that nobody could have predicted.

It’s difficult to find an analyst today who will steadfastly claim that Moore’s Law isn’t at least waning. Intel has lengthened its new-node guidance to three years rather than the traditional two. Numerous applications are “staying back” on larger geometries because the economics are better, the technology is a better fit, or the risk is lower. The number of fabs in the world that are even trying to keep up with Moore’s Law is vanishingly small. The cost to bring up a fab on a new process node exceeds the GDP of many small countries. And, with each new generation, the incremental benefits in cost, power, and performance are smaller, and the cost to realize them is exponentially higher.
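
The slip from a two-year cadence to three years may sound modest, but compounding makes it dramatic. A back-of-the-envelope comparison using the same simple doubling model (illustrative arithmetic, not a measured roadmap):

```python
# Cumulative density gain over one decade at two cadences.
# Pure doubling-model arithmetic; real node-to-node gains vary.

def gain(years, doubling_period):
    return 2 ** (years / doubling_period)

decade = 10
print(f"2-year cadence: {gain(decade, 2):.1f}x over {decade} years")  # 32.0x
print(f"3-year cadence: {gain(decade, 3):.1f}x over {decade} years")  # ~10.1x
```

Over a single decade, the slower cadence surrenders roughly a factor of three in cumulative density gain.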

The actual node names have long since lost their meaning. While marketers debate whether TSMC’s 7nm is really the same as Intel’s 10nm, the fact is that neither process maps well to its name. You’ll be hard-pressed to find any important structure in TSMC’s process that actually measures 7nm, or in Intel’s that actually measures 10nm. Semiconductor manufacturers have long been simply assigning the “expected” name to the next node and then doing their best to give it the best price, power, and performance specs they can manage in the allotted time. There’s no real audit or accounting for the actuals of any new generation. The proof is in the performance, power, and cost delivered to customers, so about the best we can do is to compare similar devices delivered on subsequent nodes.

But even measuring the performance, cost, and power consumption of integrated circuits doesn’t capture the true effects of the Moore’s Law MacGuffin. Because the entire industry has normalized exponential improvement, the penumbra of progress has extended to memory and storage, system architecture, connectivity, software complexity and capability, and discontinuous innovations such as AI. Even if lithography suddenly stopped forever, we’d have incredible inertia in technological progress from the culture of innovation that’s formed around the core of Moore’s Law.

With Moore’s Law declining, we are on the cusp of a complete revolution in system architecture that will be felt for decades. The von Neumann architecture that has dominated for years is giving way to heterogeneous processing with a variety of accelerators, which is altering the structure of even the most basic systems. Neural networks are gaining traction and capability so fast that entire new digital architectures are being pioneered to accelerate and take advantage of them. The feedback loop generated by these evolutions will likely have results we can’t predict right now, except to say that we will continue to have exponential progress in system capability.

And that’s what Moore’s Law was all about in the first place.

Wikipedia says: “In fiction, a MacGuffin is a plot device in the form of some goal, desired object, or another motivator that the protagonist pursues, often with little or no narrative explanation.” Truly, by inspiring several entire generations of engineers to repeatedly challenge themselves against a seemingly unachievable goal, Moore’s Law altered the course of history by doing nothing more than counting transistors.

Re-reading “Cramming more components…,” it is fascinating to note that, while the piece is revered as a miracle of modern forecasting, the text itself is humble and unpretentious. It reads more as an exploration of possibility than as a prediction. Moore is essentially saying that he sees no reason exponential scaling shouldn’t be achievable, and that, if it were, the possibilities would be almost unimaginable.

Today, that notion is just as valid.

