
Re-interpreting Moore’s Law

The MacGuffin that Changed the World

For two or three decades, there has been a raging debate about the longevity and relevance of Moore’s Law. Is it dead? Has it changed? Is it slowly fizzling out? Was it a law or just a projection? Is it really about transistor density only, or something more conceptual? Did Moore really say “doubles every two years,” or was it more like 18 months? Was Moore’s Law really invented by Moore, or by Carver Mead?

Moore’s article, “Cramming more components onto integrated circuits,” published in the April 19, 1965 issue of Electronics magazine, has achieved a status not unlike that of a religious tome, with generations of analysts picking at the nuance of every word. The practical upshot, though, is that we collectively came to expect that integrated circuits would double in density about every two years, and that this would bring with it corresponding improvements in cost, capability, performance, and power consumption.
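As a back-of-the-envelope illustration of what that cadence implies, here is a minimal Python sketch of compound doubling. The baseline of 2,300 transistors in 1971 (roughly the count usually cited for the Intel 4004) and the fixed two-year period are illustrative assumptions, not figures taken from Moore’s article:

# Minimal sketch: the compound growth implied by "density doubles about every two years".
# Baseline (2,300 transistors in 1971) and the constant doubling period are assumptions for illustration.

def projected_transistors(start_count, start_year, year, doubling_period_years=2.0):
    """Project a transistor count for `year`, assuming a constant doubling period."""
    doublings = (year - start_year) / doubling_period_years
    return start_count * 2 ** doublings

if __name__ == "__main__":
    for year in (1971, 1981, 1991, 2001, 2011, 2021):
        print(f"{year}: ~{projected_transistors(2300, 1971, year):,.0f} transistors")

Run forward fifty years, that simple rule lands roughly within an order of magnitude of the largest monolithic chips actually shipping today, which is the remarkable part.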

It’s easy to get caught up in semantic arguments about the higher meaning of a fifty-plus-year-old magazine article. But Moore’s Law has been the MacGuffin of an entire industry for five decades. “What are we doing next? Well, Moore’s Law says we need to double the density again, and we’ve only got 23 months left. We’d better get to it!” Moore’s Law has been the reason, the excuse, the motivator, the savior, the law that has driven several generations of electronics and software engineers to challenge themselves and their craft, both individually and as a group, to achieve something unprecedented in human history.

And, while Moore’s Law came to be interpreted mostly as a prediction on the progress of lithography, an entire engine of innovation built up around that backbone that had nothing to do with transistor density. Our processor architectures evolved by leaps and bounds. Our ability to design complex digital systems exploded, with design automation tools giving us levels of productivity we would never have conceived of just a few years before. Our software technology improved by orders of magnitude. And, there was something else – the bubbling cauldron of all those exponential progress vectors created a transcendence, an increase in technological capability that nobody could have predicted.

It’s difficult to find an analyst today who will steadfastly claim that Moore’s Law isn’t at least waning. Intel has lengthened its new-node guidance to three years rather than the traditional two. Numerous applications are “staying back” on larger geometries because the economics are better, the technology is a better fit, or the risk is lower. The number of fabs in the world that are even trying to keep up with Moore’s Law is vanishingly small. The cost to bring up a fab on a new process node exceeds the GDP of many small countries. And, with each new generation, the incremental benefits in cost, power, and performance are smaller, and the cost to realize them is exponentially higher.

The actual node names have long since lost their meaning. While marketers debate whether TSMC’s 7nm is really the same as Intel’s 10nm, the fact is that neither process maps well to its name. You’ll be hard-pressed to find any important structure in TSMC’s process that actually measures 7 nm, or in Intel’s that actually measures 10 nm. Semiconductor manufacturers have long been simply assigning the “expected” name to the next node, and then doing their best to make it deliver the best price, power, and performance specs they can manage in the allotted time. There’s no real audit or accounting for the actuals of any new generation. The proof is in the performance, power, and cost delivered to customers, so about the best we can do is to compare similar devices delivered on subsequent nodes.

But even measuring the performance, cost, and power consumption of integrated circuits doesn’t capture the true effects of the Moore’s Law MacGuffin. The entire industry has normalized exponential improvement, and the penumbra of progress has extended to memory and storage, system architecture, connectivity, software complexity and capability, and discontinuous innovations such as AI. Even if lithography suddenly stopped forever, we’d have incredible inertia in technological progress from the culture of innovation that’s formed around the core of Moore’s Law.

With Moore’s Law declining, we are at the cusp of a complete revolution in system architecture that will be felt for decades. The von Neumann architecture that has dominated for years is giving way to heterogeneous processing with a variety of accelerators, which is altering the structure of even the most basic systems. Neural networks are gaining traction and capability so fast that entire new digital architectures are being pioneered to accelerate and take advantage of them. The feedback loop generated by these evolutions will likely have results we can’t predict right now, except to say that we will continue to have exponential progress in system capability.

And that’s what Moore’s Law was all about in the first place.

Wikipedia says: “In fiction, a MacGuffin is a plot device in the form of some goal, desired object, or another motivator that the protagonist pursues, often with little or no narrative explanation.” Truly, by inspiring several entire generations of engineers to repeatedly challenge themselves against a seemingly unachievable goal, Moore’s Law altered the course of history, by doing nothing more than counting transistors.

Re-reading “Cramming more components…,” it is fascinating to note that, while the piece is revered as a miracle of modern forecasting, the text itself is humble and unpretentious. It reads as more of an exploration of possibility than as a prediction. Moore is essentially saying that he doesn’t see a reason that exponential scaling wouldn’t be achievable, and that, if it were, the possibilities would be almost unimaginable.

Today, that notion is just as valid.

