
Intel’s Grand Vision

It Was Never About Moore’s Law

“With unit cost falling as the number of components per circuit rises, by 1975 economics may dictate squeezing as many as 65,000 components on a single silicon chip.”

– Gordon E. Moore, “Cramming More Components onto Integrated Circuits,” Electronics, pp. 114–117, April 19, 1965.

Our shuttle bus driver exited 101 and began to weave through an increasingly narrow labyrinth of residential streets. There were eight or nine journalists on the bus, and none of us had any idea where we were headed. Apparently, the driver didn’t either. Intel’s invitation had been deliberately vague, informing us only that a shuttle bus would arrive at our hotel to transport us to “the venue” and that “the venue” was NOT on the nearby Intel campus. As the shuttle bounced down a series of skinny two-lane roads, it gradually became apparent that we were lost, and, after a couple of awkward U-turns, the driver pulled over and called someone on his cell phone. “Yep, I just went past there… OK, I think I see you up the hill.”

We wound up a narrow nearby driveway into a small parking area and disembarked into a beautiful residential estate. It was the former home of Intel co-founder Robert Noyce. Intel had brought a small group of journalists here for an extraordinary event, modestly billed as “Architecture Day.” The intimate venue was buzzing with Intel staff zipping here and there, firing up demo stations, tweaking the presentation area, working with caterers, and coordinating photographers and videographers who had been hired to document the proceedings.

Intel’s technology brain trust was in the house, milling about among the arriving journalists. Raja Koduri, Intel’s chief architect, kicked off the presentations with a broad view of Intel’s architecture strategy. Koduri came to Intel about a year ago, in November 2017, from AMD’s Radeon group, where he led AMD’s APU, discrete GPU, semi-custom, and GPU compute products. Before that, he was director of graphics architecture at Apple, where he drove the graphics subsystem for the Mac product family, leading the transition to Retina displays.

Koduri described Intel’s strategic technology vision in terms of six pillars – process, architecture, memory, interconnect, security, and software. He then painted a picture of the global computing infrastructure domains, from client devices through the edge/network layers back to the data center/cloud. He divided compute architectures into scalar, vector, matrix, and spatial constructs, and he asserted that the future of computing involves a heterogeneous combination of those architectures working together across those domains, with each architecture taking on appropriate types of workloads, and with each part of the computing problem solved at the appropriate distance from the end user.

Koduri then went through each of his six pillars, explaining the role of that pillar in solving the compute challenge – and announcing several new Intel technologies along the way (more on those soon). Clearly, Intel’s strategy is to work to gain and maintain leadership positions in each of those six pillars, and to use that strength to make the world a better place. Oh, and also to continue to make enormous piles of cash.

In short, Koduri said Intel’s ultimate goal is to provide every end user with access to 10 petaFLOPS of compute power and 10 petabytes of data, all less than 10 milliseconds away.
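Those three tens are linked by simple physics. Light in optical fiber covers roughly 200 km per millisecond, so a 10-millisecond round trip strictly limits how far away the compute can sit – which is why the vision spans edge and network tiers rather than just distant clouds. A back-of-envelope sketch (the fiber speed is a standard figure; the switching overhead is our assumption, not an Intel number):

```cpp
#include <cstdio>

int main() {
  // Light in optical fiber travels at roughly 2/3 c: ~200 km per millisecond.
  const double km_per_ms = 200.0;
  const double budget_ms = 10.0;    // Koduri's end-to-end latency target
  const double overhead_ms = 2.0;   // assumed switching/serialization overhead

  // A round trip uses the remaining budget twice: out and back.
  const double max_one_way_km = (budget_ms - overhead_ms) / 2.0 * km_per_ms;
  std::printf("Compute must sit within ~%.0f km to answer in %.0f ms\n",
              max_one_way_km, budget_ms);
  return 0;
}
```

Even before a single cycle is spent on actual processing, the compute has to live within several hundred kilometers of the user.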

That triple-ten goal appears to be Intel’s version of JFK’s 1961 “I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the moon and returning him safely to the Earth.” – a simply stated but ambitious stake in the ground, designed to rally and inspire a sleeping giant into decisive action. Koduri didn’t add JFK’s “before this decade is out,” which, for Intel, is probably a good thing.

How did the assembled press corps respond to this sweeping, ambitious vision Intel laid out for itself? There was really only one question. To paraphrase: “What went wrong, and how are you going to fix it?”

Why is this the question?

Over the five decades of Moore’s Law, much of the industry – and the trade press along with it – has gotten lazy. When a genie comes along once every two years and bestows a magical doubling of everything good, it’s difficult for the engineering community to be inspired to come up with new ways to squeeze out a meager 5% or 10% improvement in anything, particularly when most of your energy is consumed just unwrapping the gifts that are falling from the sky. We in the industry press and analyst community trained ourselves to play this game as well, normalizing the notion that the only relevant question is “How are you doing on the next process node?”

The answer to that question came to be so important for predicting winners, losers, and the overall rate of progress in technology that all other avenues of investigation and inquiry fell by the wayside. Want to know how Intel is doing? Find out whether they are ahead of or behind schedule on the next node. Everything else is noise.

Now that Intel is infamously late delivering “10nm”, there can be only one answer to the question of how Intel is doing. Thus, the press is reduced to asking, “What went wrong?”

Let’s ignore Intel’s steady stream of record revenue quarters. We can also gloss over the fact that the company has a dominant presence in technology for the data center, that they absolutely own the architecture for every part of the compute infrastructure except the edge device itself, and that they have groundbreaking developments in compute architecture, memory, storage, networking, and the software infrastructure that stitches all those together – across practically all six of the pillars Intel is using to define the compute technology space.

If the company has missed a Moore’s Law milestone, the press reasons, they have failed. No other information is important. And perhaps that’s the reason Intel invited us up that hill in the first place and served us fish and wine in the former home of the father of Silicon Valley – to broaden our vision beyond the single-digit scoreboard of the tick-tock-tock of Moore’s Law, to get the assembled analysts and writers to consider the possibility that Intel is not quite dead yet, and to think about the emerging era of globally distributed heterogeneous computing with a breadth and sophistication that transcends the limitations of lithography.

Wait, when did you say 7nm will be shipping?

Earlier this year, Intel hired former AMD Zen designer Jim Keller away from Tesla and made him senior vice president in charge of silicon engineering. Keller took the floor after Koduri and exuded charm as his talk took a wild ramble through the convergence of his career and Intel’s need to convince analysts – and therefore investors – that Intel was bringing in the big guns, correcting course on this process-node thing, and that the ship would be back on track any time now. See? It’s turning already.

From the beginning, Moore’s Law was a race.

In his landmark 1965 article, which later became the defining document of “Moore’s Law,” Intel co-founder Gordon Moore made a remarkable number of insightful predictions. The majority of them later came true. Interestingly, the one that came to be known as “Moore’s Law” was not among them. Moore’s Law has come to be understood as “a doubling of the number of transistors on a high-density chip every two years.” However, Moore’s original article actually said things would progress twice that fast:

“The complexity for minimum component costs has increased at a rate of roughly a factor of two per year… Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least ten years.”

Then, in 1975, at the end of that ten years, in a paper at the IEEE International Electron Devices Meeting titled “Progress in Digital Integrated Electronics,” Moore said the rate would slow to a doubling every two years:

“…the rate of increase of complexity can be expected to change slope in the next few years … The new slope might approximate a doubling every two years, rather than every year, by the end of the decade.”
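The arithmetic behind those two slopes is worth a moment. Moore’s 1965 plot started from roughly 64 components per chip and extrapolated ten annual doublings, landing almost exactly on the “as many as 65,000” figure quoted at the top of this article. The 1975 revision halved the exponent, and that difference compounds dramatically over a decade. A quick check:

```cpp
#include <cstdio>
#include <cmath>

int main() {
  // Moore's 1965 extrapolation: ~64 components per chip, doubling every year.
  const double start_1965 = 64.0;
  const double by_1975 = start_1965 * std::pow(2.0, 10);  // ten doublings
  std::printf("1965 rate after 10 years: %.0f components\n", by_1975);
  // -> 65536, i.e., the "as many as 65,000" in Moore's abstract.

  // The 1975 revision: doubling every two years instead of every year.
  std::printf("Growth per decade: %.0fx (yearly) vs %.0fx (every two years)\n",
              std::pow(2.0, 10), std::pow(2.0, 5));
  return 0;
}
```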

From that date on, the industry circled around that one metric. And there was never any doubt that someday our exponential freight train of semiconductor scaling would go slamming into the wall of physics, bringing the Moore’s Law race to a close. The problem is – when a race ends at a wall – the leader is the first one to slam into the concrete.

The end of Moore’s Law has been a fuzzy wall, for sure. Physics didn’t jump in all at once. Thermal limitations caused Dennard scaling to break down over a decade ago, and the industry started a game of compromise, trading off among power, performance, and cost depending on system requirements. Economics stepped in next, with skyrocketing costs for lithography advances such as multiple patterning and EUV making “leading-edge” processes impractical for all but a very small subset of new chip designs.
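For readers who want the mechanics behind that breakdown: in the first-order model, dynamic power per transistor goes as C·V²·f. Under classic Dennard scaling, a shrink by factor k cut capacitance and voltage along with the geometry, so a k² increase in transistor density came with flat power density. Once supply voltages hit their floor, the same shrink raises power density by k with flat clocks, or by k² if you keep pushing frequency – the thermal wall in a nutshell. A sketch of that arithmetic (textbook first-order scaling, not Intel data):

```cpp
#include <cstdio>

// First-order CMOS dynamic power: P ~ C * V^2 * f per transistor.
// A classic Dennard shrink scales linear dimensions by 1/k, so C -> C/k,
// V -> V/k, f -> f*k, and transistor density rises by k^2.
int main() {
  const double k = 1.4;  // roughly one full node (0.7x linear shrink)

  // Classic Dennard scaling: voltage shrinks with the geometry.
  const double p_dennard = (1.0 / k) * (1.0 / (k * k)) * k;  // C * V^2 * f
  std::printf("Dennard power density:        %.2fx (flat)\n",
              p_dennard * k * k);

  // Post-Dennard: V is stuck near its floor.
  const double p_flat_f = (1.0 / k) * 1.0 * 1.0;  // frequency held flat
  const double p_push_f = (1.0 / k) * 1.0 * k;    // frequency still pushed
  std::printf("V fixed, f flat:   %.2fx power density per shrink\n",
              p_flat_f * k * k);
  std::printf("V fixed, f pushed: %.2fx power density per shrink\n",
              p_push_f * k * k);
  return 0;
}
```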

Marketers muddied the waters by changing the logic of node naming, associating the “nm” numbers with a relative figure of merit that took into consideration power, performance, and cost improvements – independent of actual feature size. One company even claimed a new node for a process that used exactly the same feature size as the previous generation but took advantage of FinFET transistor technology to deliver power and performance advantages. Intel themselves set about explaining to the world that their “10nm” generation was actually comparable to other companies’ “7nm” generation, implying that they were still delivering ahead of the pack, regardless of what the node names said.
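Those claims are checkable, because in 2017 Intel itself proposed a transistor-density figure of merit (a weighted mix of standard NAND and scan flip-flop cell sizes) precisely to cut through node-name marketing. Plugging in the peak-density estimates that circulated publicly at the time – third-party and press figures, not Architecture Day disclosures – Intel’s “10” does indeed sit alongside the foundries’ “7”:

```cpp
#include <cstdio>

int main() {
  // Published peak-density estimates, million transistors per mm^2.
  // Third-party/press figures circa 2018, not Architecture Day disclosures.
  struct Node { const char* name; double mtr_per_mm2; };
  const Node nodes[] = {
      {"Intel \"14nm\"",    37.5},
      {"Foundry \"7nm\"",   91.2},
      {"Intel \"10nm\"",   100.8},
  };
  for (const Node& n : nodes)
    std::printf("%-15s ~%5.1f MTr/mm2\n", n.name, n.mtr_per_mm2);
  // Measured by density rather than name, a "10" can out-pack a "7".
  return 0;
}
```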

All of this fog brought us to where we are today, with Intel delivering 10nm – and a host of innovative new technologies tied to that train – significantly later than promised or expected. That has placed them in the position of trying to distract the world from the question of “what went wrong?” with a credible, fascinating, and insightful narrative about the future of computing and Intel’s role in it. The truth is, there is much, much more to our technological evolution than transistor density. In fact, the influence of transistor density has been waning for over a decade as other key architectural advances have taken the stage. Intel is right to point that out, to share their vision of the future, and to outline the numerous areas ripe for innovation beyond process technology.

But, really, what went wrong?

Intel probably took too big a bite of the Moore’s Law apple – setting overly ambitious goals for 10nm and finding themselves struggling to reach the yields required to ship in volume. They doubled down on that error by tying too many other product and technology releases to the delivery of that 10nm process. While these problems have caused consternation for Intel, their shareholders, and their customers, they do not appear to have had a significant impact on the company’s financial performance or its pace of innovation.

During the course of “Architecture Day,” Intel announced a number of new technologies that we’ll be discussing in the coming weeks. The new “Foveros” face-to-face 3D packaging technology could represent a major step forward in system-in-package integration, and it could allow a completely different approach to IC development. New 112 Gbps transceivers bring unprecedented bandwidth to SerDes connections. New Gen 11 integrated graphics processors will bring TFLOPS-class performance to small-form-factor devices. New “Sunny Cove” CPU cores will improve latency, throughput, and security in processors from the client to the data center. Improvements in Optane memory will bring more data closer to the CPU for faster processing of bigger data sets, like those used in AI and large databases. SSDs based on Intel’s terabit QLC NAND will move more bulk data from hard disks to SSDs, allowing faster access to that data, and, combined with Optane memory, will fill critical gaps in the memory/storage hierarchy.
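That Gen 11 “TFLOPS” claim is straightforward arithmetic if you take the disclosed 64 execution units at face value: each Intel EU can execute two 4-wide FP32 fused multiply-adds per clock, and an FMA counts as two FLOPs. A rough sketch (the ~1 GHz clock is our assumption – Intel committed to no shipping frequencies):

```cpp
#include <cstdio>

int main() {
  // Disclosed: Gen 11 integrated graphics has 64 execution units (EUs).
  const int eus = 64;
  // Per EU per clock: two 4-wide FP32 ALUs, each doing a fused multiply-add,
  // and an FMA counts as two FLOPs -> 2 * 4 * 2 = 16 FLOPs.
  const int flops_per_eu_clk = 16;
  const double clock_ghz = 1.0;  // assumed; Intel gave no shipping clocks

  const double gflops = eus * flops_per_eu_clk * clock_ghz;
  std::printf("Peak FP32: ~%.0f GFLOPS (%.2f TFLOPS)\n", gflops, gflops / 1e3);
  return 0;
}
```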

On the software side, a new “oneAPI” approach to software development will simplify the programming of diverse computing engines across CPU, GPU, FPGA, AI, and other accelerators. Intel is also releasing the Deep Learning Reference Stack, an integrated, open-source stack designed to ensure that AI developers have easy access to all of the features and functionality of Intel compute platforms.
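Intel shared no syntax or API details for oneAPI at the event, so anything concrete here is necessarily a stand-in. But “one code base across CPUs, GPUs, FPGAs, and accelerators” is the same pitch as the existing Khronos SYCL standard for single-source heterogeneous C++, and a SYCL-style sketch gives a flavor of what that style of programming looks like (illustrative only – not Intel’s actual oneAPI):

```cpp
#include <sycl/sycl.hpp>  // SYCL 2020 header (older toolchains: <CL/sycl.hpp>)
#include <cstdio>
#include <vector>

int main() {
  sycl::queue q;  // the runtime selects a device: CPU, GPU, or accelerator
  const size_t n = 1024;
  std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);
  {
    sycl::buffer A(a), B(b), C(c);  // wrap host data for device access
    q.submit([&](sycl::handler& h) {
      sycl::accessor ra(A, h, sycl::read_only);
      sycl::accessor rb(B, h, sycl::read_only);
      sycl::accessor wc(C, h, sycl::write_only, sycl::no_init);
      // The kernel is plain C++ living in the same source file as host code.
      h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
        wc[i] = ra[i] + rb[i];
      });
    });
  }  // buffers go out of scope: results copy back to the host vectors
  std::printf("c[0] = %.1f, computed on: %s\n", c[0],
              q.get_device().get_info<sycl::info::device::name>().c_str());
  return 0;
}
```

The point is that one ordinary C++ source targets whatever device the runtime finds, which is the problem oneAPI says it will solve across Intel’s whole engine portfolio.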

All of this we’ll cover in detail in the near future.

But until then? Yeah, 10nm is way late.

