
Too Big to Fail

Intel’s Itanium chip is 10 years old. Ten years of designing and building one of the biggest, fastest, and most complex microprocessors ever made. And 10 years of making excuses for it, too. For Itanium has been a colossal disappointment, not to say embarrassment, for the chip company. It was intended to upend the whole microprocessor industry and finally spell the end of the hated x86. Instead, here we are 10 years on, and Intel is selling more x86 chips than ever while support for Itanium, which was always a bit meager, continues to wane.

What went wrong? How could Intel—a company with more brains than a zombie Thanksgiving—have fouled up so badly? Microsoft is dropping support for Windows on Itanium. Red Hat Linux will no longer support Itanium in the next version. Even Intel itself has discontinued its C compiler. Itanium chip sales have never come close to their expected level, and 95% of the chips that do sell go directly to HP, the company that helped design it in the first place. Like a certain passenger liner, the “Itanic” was the biggest and most advanced design of its day. Now it’s more like a jewel that’s sunk to the bottom of the sea. 

None of this is a reflection on Intel’s engineers. They designed a brilliant and technically advanced device. The newest Itanium chip (code-named Poulson) has a staggering 3.1 billion transistors. It has 50 MB of on-chip memory just in cache. This thing’s so big it beeps when it backs up.

But hardly anybody’s buying it, which tells us that advanced engineering isn’t a guaranteed route to success. What can we, as mere mortal designers and programmers, learn from this?

Take a gander at the chart below (data courtesy of market-research firm IDC). As you can see, Itanium sales (in blue) were supposed to take off faster than a bride’s nightie. Alas, actual sales (in orange) were flat and disappointing. 


Sales of Intel’s Itanium processor family have consistently—and spectacularly—failed to live up to expectation. Ten years on, Itanium sales still don’t meet the projections expected for the first six months.


Let us take a moment to savor the implications of the leftmost line on this chart. If Itanium sales had followed that projected curve and reached $5 billion back in 2000, $15 billion in 2001, and about $37 billion in 2002, today’s sales would be literally off the chart. These breathlessly optimistic projections suggest some manner of pharmaceutical intervention.

Next year’s curve, shown just to the right of the first line, shows barely a modicum of circumspection. Naturally, it shows sales starting a year later, but the ramp is even more optimistic, reaching $30 billion in just two years.

Year by year, we see the projections gradually begin to dwindle and flatten out. The slope gets less aggressive and the sales figures slump by a factor of five or so. Hope and expectation give way to reality and disappointment, in graph form.

Even after Itanium chips actually did start shipping (orange line) and the market researchers presumably had real data to rely upon, sales projections were still off by an order of magnitude. Hope springs eternal.

What Can We Learn From This?

Learned and carefully researched papers have been, and will continue to be, written about Itanium, but we can focus on just a few points that affect us as designers and programmers. With any luck, we can learn from Itanium’s mistakes.

Lesson #1. What problem are you solving? Itanium solved Intel’s problem of how to build a faster chip to compete with the RISC vendors, but it didn’t solve customers’ problems of how to make their PCs run faster. In fact, it did just the opposite. Itanium sacrificed performance on x86 code for performance on (mostly nonexistent) IA-64 code. As a designer or marketing manager, you need to always ask yourself, “What problem am I solving?” If you can’t answer that question, put down your tools and step away from the workbench.

Lesson #2. Better technology doesn’t matter. At least, not all the time—or even very often. Intel and HP were replacing the old x86, the worst CPU architecture in the world. How could they not succeed? The technology, engineering, and design philosophy behind Itanium were all brilliant. But it didn’t matter because customers don’t buy technology. They buy a product, and Itanium wasn’t a product they wanted. Unless you’re a research scientist, technology is a means to an end, not an end in itself.

Lesson #3. Momentum matters. Even though Itanium chips can run x86 code, the early ones didn’t do it very well. Its half-fast performance was on purpose, but the plan backfired. Intel didn’t want Itanium’s x86 performance to be too good, or people wouldn’t have any incentive to switch to IA-64 software. But the company underestimated people’s attachment to their old code. Itanium wasn’t an upgrade for them. From an x86 user’s point of view, Itanium was more expensive but slower—obviously a bad “upgrade.”

Lesson #4. Volume trumps technology. Like any brand new product, Itanium started from zero: zero installed base, zero available software, zero programmer experience, zero history. Compare that to x86, which had (and still has) an awesome ecosystem surrounding it. Practically everyone has used x86 chips or software at some point, and there are gobs of tools, support, and talent to go around. It was like night and day: the best-supported (though hardly best-loved) CPU in the world versus the newest and least-known CPU in the world. They both had Intel logos on top, but otherwise were worlds apart.

Lesson #5. Be careful what you improve. From a technical perspective, Itanium was, and still is, a vast improvement over x86. How could it not be? It includes all the latest thinking about CPU architecture; it had the input of the best minds in the business; it had Intel’s awesome financial and marketing resources behind it. Absolutely everything was new and improved. It was a technical tour de force. And yet what people wanted was a faster x86.

Even what we think of as the x86—that is, a 32-bit CISC processor—is an evolution of the earlier 8086, which was, in turn, an evolution of the 8080, which was based on the 4004, and so on. It’s hard to even count the number of times the x86 has been “stretched” beyond its original design. It’s the ultimate Hamburger Helper processor.

Which is entirely the point. Is it any coincidence that one of the oldest CPUs in existence is also one of the most popular, best-known, and most profitable? Longevity and compatibility do really count for something. The brilliant, modern, clean-sheet design of Itanium failed to even make a dent in sales of the wheezing, clattering ironmongery of the x86.

The history of Itanium is a perfect illustration of Clayton Christensen’s observation that technology improves faster than people need it to. Itanium overshot people’s expectations of what a processor should do. So did most RISC processors of the past few decades, which is why they’re not around anymore. Sure, they were all “better” chips from an engineering perspective, but they weren’t better along any axis that the market was measuring. 

Then, as now, nobody wanted to throw away their PC every 2–3 years for entirely new machines just because those machines are “better.” We retain our QWERTY keyboards in spite of “better” and more ergonomic options. We cook in iron pots over open flames when “better” options surely exist. Better isn’t always better.

Among the lessons that Itanium can teach us are to distrust our engineering instincts; to view products from our customers’ point of view; and to respect momentum and inertia. We can easily “improve” our products faster than customers want us to, and we can even more easily deceive ourselves into improving them in entirely the wrong ways.

Engineering can be an exciting means of self-expression, but it needs to be leavened with a dose of old-fashioned humanity. Just because we build it doesn’t mean customers will come. 

One thought on “Too Big to Fail”

  1. IBM learned Lesson #2 with the System/360, when previous 1401 users did not have source code to re-compile; IBM therefore built and sold emulators to run the old machine code:
    Lesson #2. Better technology doesn’t matter. At least, not all the time—or even very often. […] But it didn’t matter because customers don’t buy technology. They buy a product, and Itanium wasn’t a product they wanted.
    Also, this is not the problem:
    “Data dependencies and load/use penalties are just as hard to predict in software as they are in hardware. Will the next instruction use the data from that previous one? Dunno; depends on the value, which isn’t known until runtime. Can the CPU “hoist” the load from memory to save time? Dunno; it depends on where the data is stored. Some things aren’t knowable until runtime, where hardware knows more than even the smartest compiler.”
    The fact is that the next instruction does NOT use the result a significant fraction of the time. And general-purpose computing branches to non-sequential locations frequently enough that branch penalties are also significant.
    GPUs and heterogeneous accelerators work because many algorithms only need the amount of data that can be streamed to on-chip memory. They do not need to access data scattered all over a 64-bit address space. And that data does not have to be shared, so multi-level cache coherency is unnecessary in those cases.
    The whole premise of cache was that matrix inversion would access data within the same cache line frequently, AND that main memory had to hold updated data shared by every user on the system, so cache coherency was also a must.
    And now we have RISC-V, where each instruction has two source registers and a destination register, or a small immediate constant can take the place of one source register in the register-immediate forms.
    And the whole world is enamored with RISC-V.
    Give me a break!
    But there is an open-source compiler that identifies every variable and constant in the order they are used, which makes it pretty obvious whether a result is used by the next operator. Branches and loop targets likewise identify potential out-of-order execution.
    But RISC-V is going to save the world with an open-source ISA that is based on an assembler, so it does not have to do a couple of compares because the assembler can swap which registers are used.
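    The load-hoisting question debated above can be made concrete with a small C sketch (my illustration, not from the article or the comment; the function name is hypothetical). A compiler cannot statically move the load above the store without proving the two pointers never alias, while the hardware sees the actual addresses at runtime:

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* The compiler cannot hoist the load of *src above the store to *dst,
     * because dst and src might point to the same word. Only at runtime,
     * when the addresses are known, is the answer clear -- the article's
     * "hardware knows more than the smartest compiler" argument. The
     * commenter's counterpoint is that in practice they rarely alias. */
    int store_then_load(int *dst, const int *src) {
        *dst = 42;      /* store */
        return *src;    /* load: result depends on whether src aliases dst */
    }

    int main(void) {
        int a = 1, b = 2;
        assert(store_then_load(&a, &b) == 2);   /* no aliasing: load is independent */
        assert(store_then_load(&a, &a) == 42);  /* aliasing: load must see the store */
        printf("both cases behave as expected\n");
        return 0;
    }
    ```

    The same ambiguity is why IA-64 added speculative/advanced loads with runtime checks: the compiler hoists optimistically and the hardware repairs the rare aliasing case.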

