“This is the way the world ends / Not with a bang but a whimper.” Those last lines from T. S. Eliot’s The Hollow Men were written about war, but could just as easily apply to one of the biggest failures in electronics engineering.
Actually, I shouldn’t say the project failed. The engineering part was pretty successful. It was a commercial failure. “The operation was a success, although the patient died,” goes the old surgeon’s joke. And so it is with Itanium.
Fans of heavy metal will know Itanium as Intel’s biggest, loudest, and most electrifying microprocessor. But like vinyl LPs of Dokken, they’re not selling.
The whimper came a few weeks ago, when Intel posted a minor two-sentence update on its Web site. In it, the company quietly says it’s “updating the definition” of the next-generation Itanium chip, codenamed Kittson. The “update” is that Kittson won’t be manufactured in 22nm silicon as originally planned; it will be a 32nm chip. While that 10nm difference may not seem like much, it’s a huge change in Itanium-land, where Kittson was supposed to be a faster upgrade from the current chips, mostly by virtue of that 22nm silicon. Now that Kittson will be made using the same process as Poulson (the current chip), there’s not much reason to upgrade. In large part, the whole point of Kittson was to move Poulson to a 22nm production line. Without that, what’s the point?
It gets worse. Intel’s second sentence says that Itanium will “converge” on the same socket and motherboard design as the company’s x86-based Xeon processors. What’s this? Itanium and Xeon to use the same socket? The only reason Intel (or anyone else) would do that would be to make it easier to switch from one chip to the other. And you get no prizes for guessing the winner and the loser in that swap.
So, in a nutshell, Intel is quietly notifying the world that its top-of-the-line product won’t be getting much faster next year, and that it’ll start sharing a socket with its older sibling. Sounds like Mom & Dad have moved a spare cot into Junior’s room and started painting the second bedroom. So long, Itanium, it was nice raising you.
What a huge letdown for the engineers working on Itanium. Although it’s not as if they didn’t see it coming. Itanium was never the hot seller Intel wanted it to be, and more than a decade of improvements never made it any more attractive.
Itanium was years in development, and it engaged the best minds in the engineering business. It was a joint venture between Hewlett-Packard and Intel, and both companies put some of their best people on it. For many of those developers, Itanium was going to be the highlight of their careers. Imagine being asked to join Intel’s next-generation microprocessor team back in 1994. The follow-on to the hugely successful Pentium franchise! The next ultra-RISC processor for HP workstations and big servers! That must have sounded like a dream job.
Okay, so there were some delays in the project. What major breakthrough development effort doesn’t have the occasional schedule slip? This is difficult stuff. We’re redefining the computer world here. Itanium was a few months late, then a few years late. Meanwhile, old-school x86 processors got faster and faster. When the first Itanium chips finally hit the street, Intel and HP were already apologizing for them, and that’s never a good sign. It’s like calling your own baby ugly. Somehow they knew.
It wasn’t supposed to be this way. Itanium was based on the latest thinking in computer science. It was beyond RISC; it was EPIC (explicitly parallel instruction computing). Hewlett-Packard was no slouch at computer design, after all, and Intel knew a thing or two about making fast chips. And the whole world knew—just knew—that x86 was heading down the tubes. No way could Intel and AMD (and the various surviving clone makers) keep the wheezing old x86 on life support much longer. Itanium was the ultimate hammer blow that would establish Intel’s dominance atop the microprocessor heap and HP’s atop the market for big iron. They couldn’t lose.
Except they did. The first Itanium chips were late (which was no big surprise) and not very fast (which was a big surprise). “Just wait for Itanium 2,” Intel said. So everyone did. And it still wasn’t what people wanted. Because it was too new.
Itanium was designed to run all-new software, written to take advantage of its radical new EPIC architecture. But what people actually had was old x86 code, dragged through generations of Pentium upgrades. Taking advantage of Itanium meant porting all that software, and that’s too much work for a 10% performance boost. If Itanium had been twice as fast as contemporary x86 chips, or even 50% faster, developers might have taken the bait. But to redesign an all-new hardware platform and write or port all-new code required a leap of faith in Itanium that few who weren’t on the payroll were willing to make. HP dutifully produced Itanium-based systems (and still does), but outside of HP’s sphere of influence, Itanium never made a dent. The world’s most ambitious and expensive chip was a miserable failure. All because of ratty old x86 code.
Now Intel is tacitly admitting defeat by suggesting that future Itania will have x86-style sockets. Build your Itanium system today and we’ll have a nice reliable Xeon chip for it tomorrow. Without that crossover option, Itanium-based systems would probably die off even faster than they already are dying. At least this way Intel has a chance of preserving some of its customer base.
Microprocessor design is a funny thing. You can throw all the talent and transistors at it that you want, but it still comes down to the invisible, intangible code that people want to run on it. It’s like telling jokes, writing books, or publishing an engineering journal: all the effort is wasted if no one speaks your language. Itanium wound up being the Vogon poetry of microprocessors – the third-worst in the universe.