The title may have put you off. In fact, it probably should have. After all, most of us in the press/analyst community have – at one time or another during the past decade or two – been walking around like idiots wearing sandwich signs saying, “The End is Nigh!” And, we got just about as much attention as we deserved. “Yawn, very interesting, press and analysts, and now back to planning the next process node…”
It gets worse. Predicting that Moore’s Law will end is pretty much a no-brainer. It’s about as controversial as predicting that a person will die… someday. There is obviously some point at which the laws of physics and the reality of economics will no longer allow us to double the amount of stuff we put on a single chip every two years. The question is – when will we reach that point, and how will we know we are there?
The end of Moore’s law won’t be like a sudden train derailment – sending cars crashing into one another while the whole thing explodes into a fiery ball. It will be more like – sunset, with the white-hot light of day slowly fading through an array of vivid colors into the long, warm darkness of ubiquitous commodity semiconductor processes.
FPGAs may well be the cicadas whose fading songs signal the beginning of that technological twilight. For the better part of three decades, the FPGA industry has been both driving and driven by Moore’s Law. On each two-year cycle, the company that was able to claim the next node first could claim victory, and that triumph would be felt in their future financial performance. Being the first to roll out chips with double-the-everything-for-cheaper was a sure-fire formula for winning the high-end sockets that would lead to later big-volume, high-margin sales.
The parade of press releases chronicled the biennial battle with the enthusiasm of home-team sportscasters at a championship game. “Ours are first! Ours are fastest! Ours are YOURS!” The flurry of superlatives never failed to captivate and confuse us, and the motto “Declare victory early and often” seemed to be eternally engraved in the rulebook for FPGA market competition.
Now, however, that frenzied communications cacophony has taken on a slower pace and a more studied tone. Yes, there is still a perfectly viable next-node race going on, with FinFETs sweetening the pot – Xilinx riding on TSMC’s 16nm FinFET process vs. Altera’s mount on Intel’s 14nm Tri-Gate process. Both companies and both semiconductor fabs have had challenges with that node that have slid back the original schedules, and it is not the least bit clear at this point which of the two big players will emerge with the first and/or best FinFET-having mind-boggling FPGAs.
But the landscape surrounding that race is what has changed, dramatically. And it has changed enough that one can hear the din of the cicadas fading slowly in response to the setting sun of fifty years of Moore’s Law.
For starters, Altera completely took a pass on the 20nm node with their high-end Stratix family, choosing instead to roll only their Arria 10 mid-range devices on that process. Their Stratix line held at 28nm, with all the effort going into the upcoming Intel 14nm project. Xilinx relied heavily on interposer-based multi-die technology to get the impressive numbers with their latest announcement. Both companies are hedging their FinFET bets, and they are laying the groundwork for a new status quo by explaining to customers how 28nm will be a “long-lived” node.
28nm will be “long-lived” for several reasons. First, the next node will most definitely be later than usual. The Moore’s Law bell has rung, and we’re not looking at any new chips yet. And, we almost certainly won’t be until sometime next calendar year. Even then, only the super-big, super-fast high-end FPGAs will be fabricated on the profusely-bleeding-edge FinFET technologies. For smaller devices, the economics are better at 28nm, and probably will be for a while.
For the past several nodes, the gifts of Moore’s Law have been less lavish, and the cost has been considerably higher. In the good ol’ days, we got double everything – double speed, double power efficiency, double density (and therefore half the cost) – there was nothing not to like. Then slowly, node-by-node, Mr. Moore became increasingly stingy. Chip designers had to trade off between leakage current and speed, between density and power consumption, between everything and everything else. Instead of getting everything doubled, we got to choose our favorite two, then our favorite one, and finally, with 20nm, there was a bit of a question as to what we were really gaining, and whether it offset the dramatically higher costs.
For semiconductor companies, FPGAs have been the go-to test pilots for each new process. FPGAs offer a rich variety of structures – LUT fabric, memory, processor cores, DSP cores, fancy IO, SerDes transceivers, and even some analog. With an FPGA, you could try out your newest semiconductor process on most of the elements that would be included in the heartbreakingly complex SoCs coming down the pipe from companies who make things like smart phones, but with a much lower level of overall complexity. All of that step-and-repeat replicated-cell LUT fabric made for some nice low-drama die-filling. And, FPGAs had the economy of scale to ramp up to decent production numbers without pulling a millions-of-units vacuum on the fab before the process was mature.
As the non-recurring-engineering cost for each new node has continued exponentially upward, propelled most recently by soul-crushing complexities like double-, triple-, and even possibly quadruple-patterning, the number of companies with the resources to actually complete a chip design on those nodes has diminished. We may be in danger of, to paraphrase Yogi Berra, “Nobody designs high-end chips anymore because it’s too expensive.” At the same time, packaging has reached record levels of cost and complexity, leading us to a point where the resources and wherewithal required to produce a leading-edge chip (or chips) packaged in a state-of-the-art module are simply staggering.
As the field of players designing SoCs (or SiPs) on leading processes narrows to the very largest system suppliers like Apple, Qualcomm, and the like, and with FPGA companies leading the charge into each node ahead of those massive-volume players, we expect to see the FPGA companies as the first to blink (maybe several times) at the idea of continuing to the next node. Sure, we are likely to see 10nm and maybe even 7nm ICs at some point (although most definitely not at the historical 2-year tempo), but it’s pretty unclear if there’s a step after that.
The FPGA companies are already starting that blinking here at the 14/16nm node (unless they just got something in their eyes for a minute there), by spreading their offerings out on a range of historical processes, relying more on interposer-based packaging to ratchet up the capabilities, slowing down the pace of new family introductions, and beginning to market their software, IP, and other differentiators far more than the bare-metal capabilities of the silicon itself.
The world will not end as the light fades on Moore’s Law, however. We should remember that sunset in one place is sunrise somewhere else. When we can’t sit back in our recliners and get 2x everything for free every two years, we may have to earn the next level of innovation on our own – and that will be exciting to see.
13 thoughts on “The Sun Sets on Moore’s Law”
A bigger question is simpler… do we even need 2X every two years?
We have seen a rapid rise in user interface complexity – from serial RS232 character terminals to real-time, computer-generated, high-resolution stereo streaming video in virtual reality games – enabled by Moore’s Law over the last 30 years.
What is the next commodity application that could possibly pay for, and be enabled by, two or three more generations of Moore’s Law? Even the most intensive virtual reality gaming applications are solvable today, and would only be cost-reduced by another generation or two of Moore’s Law cycles.
Can we even engineer applications to take advantage of more complex dies over 6-10 more Moore’s law cycles, given finite productivity of engineers and programmers? Can engineering team sizes even grow at (2^N) for large die projects?
At some point with Moore’s Law, the cost of the silicon becomes significantly cheaper than the engineering cost to use it, even for commodity high-volume applications – i.e., SiliconCost/(2^N) as the die shrinks, while fixed, large-die projects see EngineeringCost*(2^N) for N more Moore’s Law cycles.
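The commenter’s cost model is easy to sketch numerically. In this toy version (all base cost figures are hypothetical placeholders, not real fab economics), per-unit silicon cost halves each cycle while the one-time engineering cost of a full-size die doubles, so the unit volume needed to amortize the engineering quadruples every cycle:

```python
# Toy model of the comment's point: SiliconCost/(2^N) vs EngineeringCost*(2^N).
# All base costs below are made-up placeholders for illustration only.

def silicon_cost(n, base=10.0):
    """Per-unit silicon cost after n Moore's Law cycles: base / 2^n."""
    return base / (2 ** n)

def engineering_cost(n, base=1_000_000.0):
    """One-time engineering (NRE) cost for a full-size die after n cycles: base * 2^n."""
    return base * (2 ** n)

def breakeven_volume(n, silicon_base=10.0, eng_base=1_000_000.0):
    """Units you must ship before the silicon savings cover the NRE."""
    return engineering_cost(n, eng_base) / silicon_cost(n, silicon_base)

# Break-even volume grows 4x per cycle: 1e5, 4e5, 1.6e6, 6.4e6, ...
for n in range(4):
    print(f"cycle {n}: break-even at {breakeven_volume(n):,.0f} units")
```

The 4x-per-cycle growth of the break-even volume is the crux of the comment: only the very highest-volume applications can keep paying for the next node.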
Twin Sons of Different Mothers (music reference likely lost on audience): hundreds of miles away on the very same day, my article on the same topic as Kevin’s:
His article is clearly better written; mine DOES win on # of bullet points.
Interesting article and well written, Kevin. Let me open by stating that the FPGA vendors have already blinked. For example, in the latest Xilinx product release, not all devices were scheduled for the 20nm node. In fact, the FPGA vendors are generally downplaying the “I’m at the next node first” chest thumping and are focusing much more on hardware feature differentiation, while also becoming more “software ready” for a potentially new breed of FPGA users.
The sun has been setting on Moore’s Law, from an economical standpoint, for many years now. The higher cost of the next node, coupled with risky hardware development/deployment has played a big role in seeing more and more overall electronic system content move to software.
What may extend Moore’s Law beyond the traditional monolithic 2D die shrink will be advances in 3D processing technology coupled with advances in 2.5D/3D packaging technologies. If the industry can truly collaborate so that these technologies become reliable and more affordable, we could see more interest in hardware implementation.