
No More Nanometers

It’s Time for New Node Naming

“I learned how to measure before I knew what was size.” – Sofi Tukker, “House Arrest”

Let’s start by speaking some truth. Nothing about the “5 nanometer” CMOS process has any real relationship to five actual nanometers of transistor dimension. That train jumped off the rails years ago, and the semiconductor industry has inflicted tremendous self-harm by perpetuating the nanometer myth.

I hear you. “The world has a billion problems, and nanometer nodes ain’t one!” But, hear me out. In the early, heady days of Moore’s Law, it made sense to characterize processes by gate length (Lg). We had about a gazillion semiconductor fabs around the world, and they needed some standardized way to, well, do anything at all, actually. But as the decades have marched past, describing semiconductor processes with length metrics based hypothetically on gate size has long since veered into the land of fiction.

Intel held the line from “10 micron” in 1972 through “0.35 micron” in 1995, an impressive 23-year run where the node name matched gate length. Then, in 1997, with the “0.25 micron/250 nm” node, they started over-achieving with an actual Lg of 200 nm – 20% better than the name would imply. This “sandbagging” continued through the next 12 years, with one node (130 nm) having an Lg of only 70 nm – almost a 2x buffer. Then, in 2011, Intel jumped over to the other side of the ledger, ushering in what we might call the “overstating decade” with the “22 nm” node sporting an Lg of 26 nm. Since then, things have continued to slide further in that direction, with the current “10 nm” node measuring in with an Lg of 18 nm – almost 2x on the other side of the “named” dimension.
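To put rough numbers on that drift, here’s a quick back-of-the-envelope sketch in Python, using only the approximate figures quoted above (the values are illustrative, not authoritative):

```python
# Node "name" vs. reported gate length (Lg), using the approximate figures
# quoted in the paragraph above. Illustrative only.
nodes = [
    ("250 nm (1997)", 250, 200),
    ("130 nm",        130,  70),
    ("22 nm (2011)",   22,  26),
    ("10 nm",          10,  18),
]

for label, named_nm, lg_nm in nodes:
    ratio = named_nm / lg_nm
    side = "sandbagging" if ratio > 1 else "overstating"
    print(f"{label:>14}: Lg ~{lg_nm} nm, name/Lg = {ratio:.2f} ({side})")
```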

So essentially, since 1997, the node name has not been a representation of any actual dimension on the chip, and it has erred in both directions by almost a factor of 2. 

For the last few years, this has been a big marketing problem for Intel from a “perception” point of view. Most industry folks understand that Intel’s “10nm” process is roughly equivalent to TSMC and Samsung’s “7nm” processes. But non-industry publications regularly write that Intel has “fallen behind” because they are still back on 10nm when other fabs have “moved on to 7nm and are working on 5nm.” The truth is, these are marketing names only, and in no way do they represent anything we might expect in comparing competing technologies.

You might think, based on that, that Intel would be the ones clamoring for a new way to describe processes, but at the (virtual) Design Automation Conference this week, Dr. H.-S. Philip Wong, Vice President of Corporate Research at TSMC, made a strong case for updating our metrics and terminology. In his keynote speech, he pointed out that three things have always been key in semiconductor value – performance, power, and area. He also highlighted that system performance is based on logic, memory, and the bandwidth of the connection between logic and memory. Finally, he advocated for metrics that would take all of those elements into account.

This is not a new position for Wong. He co-authored an IEEE paper published April 2020:

  H.-S. P. Wong et al., “A Density Metric for Semiconductor Technology [Point of View],” in Proceedings of the IEEE, vol. 108, no. 4, pp. 478-482, April 2020, doi: 10.1109/JPROC.2020.2981715.

Abstract: Since its inception, the semiconductor industry has used a physical dimension (the minimum gate length of a transistor) as a means to gauge continuous technology advancement. This metric is all but obsolete today. As a replacement, we propose a density metric, which aims to capture how advances in semiconductor device technologies enable system-level benefits. The proposed metric can be used to gauge advances in future generations of semiconductor technologies in a holistic way, by accounting for the progress in logic, memory, and packaging/integration technologies simultaneously.
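The abstract doesn’t spell out the formula here, but the spirit is easy to sketch. Purely as an illustration (the component names and units below are my assumptions, not the paper’s actual definitions), a node under such a scheme would be described by a tuple of densities rather than a single length:

```python
from dataclasses import dataclass

@dataclass
class NodeDescriptor:
    """Illustrative sketch only: a multi-component density descriptor in the
    spirit of the abstract above. Field names and units are assumptions."""
    logic_density: float        # logic transistors per mm^2 (millions)
    memory_density: float       # memory bits per mm^2 (millions)
    connection_density: float   # logic-to-memory connections per mm^2 (thousands)

    def label(self) -> str:
        # The "node name" becomes a tuple of densities instead of one length.
        return (f"[{self.logic_density:g}M transistors/mm2, "
                f"{self.memory_density:g}M bits/mm2, "
                f"{self.connection_density:g}K connections/mm2]")

# Hypothetical example values, for illustration only:
print(NodeDescriptor(logic_density=170, memory_density=300,
                     connection_density=12).label())
```

The particular fields matter less than the point: no single length can honestly collapse logic, memory, and packaging/integration progress into one number.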

Despite the fact that Intel is the one most obviously suffering the marketing tyranny of gate-length node naming at the moment, even their primary competitor clearly sees a need for change. But, if Intel, TSMC, and probably Samsung as well all think we should change the system, why have we not? 

For many decades (up until 2016), the International Technology Roadmap for Semiconductors (ITRS) told us years in advance what each node should be named. ITRS was a committee that (best I can determine) had regular meetings where they took the previous node name, divided by the square root of two, rounded to the nearest integer, declared that result to be the new node name, and then drank lots of wine. Beginning in 2016, they renamed themselves the “International Roadmap for Devices and Systems” – taking on a much broader system-level charter, and ensuring that they’d have an excuse to drink wine well beyond the impending end of Moore’s Law. Editor’s note – some actual facts may have been omitted from this paragraph.
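For what it’s worth, the divide-by-root-two treadmill described above (minus the wine) fits in a few lines of Python. A tongue-in-cheek sketch, starting arbitrarily from 180:

```python
# Take the previous node name, divide by the square root of two, round.
# Repeat until physics (or marketing) objects.
from math import sqrt

def next_node_name(current_nm: float) -> int:
    return round(current_nm / sqrt(2))

name = 180
sequence = [name]
while name > 3:
    name = next_node_name(name)
    sequence.append(name)

print(sequence)  # [180, 127, 90, 64, 45, 32, 23, 16, 11, 8, 6, 4, 3]
```

Round to the nearest marketing-friendly number instead of the nearest integer, and you land more or less on the familiar 130/90/65/45/32/22 ladder.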

But, as Wong points out, to really evaluate a semiconductor technology platform, we have to look well beyond the number of transistors we can cram on a monolithic piece of silicon. We need to look at all the elements that define system-level performance and capability and account for all of those. Beyond the usual performance, power, and area of monolithic silicon, we have packaging technology that allows us to stack more (and more varied) die in a single package, interconnect technology that improves the bandwidth between system elements, architectural and structural improvements to semiconductors that are not related to density, new materials that improve the speed and power efficiency – the list goes on and on.

I saw a seemingly well-informed discussion among technical experts the other day debating the number of atoms that span five nanometers, with the apparent underlying context that once we reach “5 nanometer” process technology, Moore’s Law will be definitely done for – on the authority of Physics. And, because we have been conditioned to connect “progress” exclusively to the Moore’s Law march toward increased transistor density, the fallacy that followed is that progress in semiconductor-based systems will soon stall. 

In reading this discussion, however, something rang familiar. It tugged on threads of neurons nestled deep within my professional past – neurons that had not frequently fired since the early 1980s. These were pre-PowerPoint days, so my engineering team was gathered in a cozy conference room with the background fan hum of an overhead projector displaying transparency foils. The speaker was writing on them with a grease pencil as he spoke, and turning a crank to scroll the previous information up, making way for new. He was a PhD research scientist from IBM labs, and a recognized expert in semiconductor technology. Over the course of several linear feet of transparency material, he had walked us through a narrative of the technical challenges the industry had overcome in the remarkable ramp to the then-current “3-micron” technology.

Then, his presentation turned toward an ominous warning. We were nearing the end of Moore’s Law. “One micron is the physical limit,” he explained. “Physics clearly shows that it is impossible to go below one micron, so Moore’s Law should be over by 1985.” At that point, he reasoned, we would no longer be able to cram more transistors into the same silicon area. Instead, future integration gains would have to be achieved by larger wafers, higher yields, and wafer-scale integration. 

Moore’s Law continued going strong for the next thirty years.

Now, when I say “going strong,” I mean it aged fairly gracefully. Moore’s Law began to lose some of the spring in its step as Dennard scaling failed around 2006. Originally, the concept of Moore’s Law was built around scaling lithography for maximum cost-efficiency, literally “cramming more components” into the same silicon area. However, the world quickly learned that Moore brought us more than simply reduced cost. Each time we scaled, we increased component density and lowered cost, but we also got similar boosts in performance and power efficiency. These performance, power, and area (PPA) gains became veritable entitlements, and engineers began to simply take for granted that there would be exponential improvement in all three with every new node. In fact, “area” – the original theme of Moore’s Law – was relegated to third billing in the PPA abbreviation.

Even though “power” was added to the list after the fact, it hit the wall first. Dennard scaling failed around 2006, not because we couldn’t pack more transistors into the same silicon area, and not because we couldn’t make transistors switch any faster, but because that many transistors in that area switching that fast generated too much heat. In the decade and a half since then, the PPA bounty of Moore’s Law has become conditional. We have to trade off between P, P, and A in order to meet our design goals. We no longer get everything for free.
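A back-of-the-envelope sketch shows why power broke first. Dynamic power per transistor goes roughly as C·V²·f; as long as supply voltage scaled down along with the other dimensions (classic Dennard scaling), power density stayed roughly flat, but once voltage stopped scaling, every shrink pushed it up. The numbers below are illustrative, not measured:

```python
# Relative power density after one node shrink, for a linear scale factor k.
# Dynamic power per device ~ C * V^2 * f; power density = power / area.
def relative_power_density(c_scale, v_scale, f_scale, area_scale):
    power = c_scale * v_scale ** 2 * f_scale
    return power / area_scale

k = 0.7  # a classic ~0.7x linear shrink per node

dennard = relative_power_density(c_scale=k, v_scale=k, f_scale=1 / k, area_scale=k ** 2)
stuck_v = relative_power_density(c_scale=k, v_scale=1.0, f_scale=1 / k, area_scale=k ** 2)

print(f"Ideal Dennard scaling:  power density x{dennard:.2f} per shrink (flat)")
print(f"Voltage stops scaling:  power density x{stuck_v:.2f} per shrink")
```

In this toy model, heat flux roughly doubles with every shrink once voltage is pinned – and that, in rough terms, is the wall the industry hit.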

This realization was a wake-up call for engineers. For the previous three decades, there was little reason to optimize … anything, really. After all, why spend a huge amount of energy chasing a 15-20% improvement that would simply be obliterated by the 2x bounty of the next Moore’s Law node, and another 2x two years after that? We simply focused on building the functionality we needed and relied on lithography progress to make it faster, cheaper, and more power efficient. Now, however, we engineers couldn’t just “phone it in” anymore. We had to come up with novel ways to improve performance and reduce power consumption, rather than relying on the penumbra of Moore’s Law to get us past the finish line in our system designs.

The “cost” leg of the Moore’s Law stool didn’t hold up that well either. While increased density did continue to make unit costs lower, the non-recurring costs to create a new chip skyrocketed. In order to keep the machine of Moore moving forward, we had to perform literal magic in design and fabrication, with technologies such as optical proximity correction, multi-patterning, and extreme ultraviolet (EUV) lithography. Non-recurring costs became so enormous that amortizing them across a production run overtook unit cost savings in all but the highest-volume, most performance-critical designs. The net effect was that most of the industry fell off the latest process node and experienced their own “end” of Moore’s Law for economic, rather than technical, reasons.
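A toy cost model makes the economics concrete. All of the numbers below are hypothetical, chosen only to show how amortized non-recurring engineering (NRE) cost can swamp the per-unit savings of a denser node:

```python
# Per-unit cost = die cost + amortized NRE. Hypothetical numbers throughout.
def cost_per_unit(die_cost, nre, volume):
    return die_cost + nre / volume

volume = 200_000  # units shipped over the product's life

mature_node  = cost_per_unit(die_cost=12.0, nre=5_000_000,  volume=volume)
leading_node = cost_per_unit(die_cost=8.0,  nre=50_000_000, volume=volume)

print(f"Mature node:  ${mature_node:,.2f} per unit")   # $12 die + $25 amortized NRE
print(f"Leading node: ${leading_node:,.2f} per unit")  # $8 die + $250 amortized NRE
```

Crank the volume up into the tens of millions of units and the leading node wins again – which is exactly why only the highest-volume, most performance-critical designs stayed on the treadmill.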

We have evolved well beyond Moore’s Law already, and it is high time we stopped measuring and representing our technology and ourselves according to fifty-year-old metrics. We are confusing the public, harming our credibility, and impairing rational thinking about the multi-dimensional, multi-disciplinary problem of continuing the remarkable progress the electronics industry has made over the past half century.

 

5 thoughts on “No More Nanometers”

  1. Spot on, Kevin, as usual. Great article. For those of us of a similar vintage (I heard at university in the UK about how 1um was going to be the end of the road for Moore’s Law), this is a great history of what happened and implicitly of the mostly unsung heroes who managed to extend it 30 years. And I suspect you’re also right about the ITRS process…

  2. “ITRS … and then drank lots of wine”, Lol!

    Anyway, in between there was one based on “half metal pitch”, I believe, from a time when memory transistors were the higher density kinds.

    In any case, even that went totally out of sync with what the real situation was, as you correctly point out. So yes, there are good reasons to define a new metric.

