
The Triumph of Inertia

Qualcomm’s Massive 48-core ARM Chip Just Ain’t Gonna Fly

“The fault… is not in our stars, but in ourselves…” – William Shakespeare, “Julius Caesar”

It wasn’t supposed to be like this.

ARM-based processors were supposed to be cheaper, more energy-efficient, cooler running, and maybe even faster than those rusty, old, clapped-out Intel x86 chips. Right?

Right? So what happened? Why do we still inhabit a world powered by Intel servers instead of shiny new ARM machines? What happened to the New Cloud?

We happened. We’re resisting our own advancements. Well, some of us, anyway. Oh, sure, a few early adopters have welcomed our new ARM-powered overlords, but the bulk of the populace still happily acquiesces to another year (decade?) of Intel hegemony. We’re not impressed with our own progress, preferring the cozy comfort of x86 familiarity.

ARM is like the kale and acai berry smoothie of the processor world: light and healthful – but what we really crave is a greasy cheeseburger on the couch when nobody’s looking. I’ll have mine “animal style,” please.

The buzz this week is about Qualcomm’s newest 48-core mega-processor for servers. It’s an impressive device (especially the foot-wide mockup shown in some of the press photos. At least, I assume it’s a mockup…?). The Centriq 2400 is based on Qualcomm’s own in-house version of ARM’s 64-bit architecture, not one of the standard Cortex-A73 cores or similar designs out of Cambridge. Qualcomm paid big money for the privilege of designing its own ARM implementations, and Centriq is one of the products of that investment.

Centriq is designed for servers, pure and simple. When it eventually goes on sale, it will be competing directly with Intel’s Xeon processors, where it will inhabit server racks, black boxes, and headless systems powering file servers and cloud systems across the globe. At least, that’s the plan.

Centriq itself is remarkable. With up to forty-eight 64-bit cores, it’s a processing beast. It’ll also have integrated storage and network interfaces, something Intel doesn’t like to do. And, it will be manufactured in a fantastically advanced 10nm process, whereas Intel’s Xeon chips are made in 14nm silicon. So, Qualcomm is promising that it will have more than double the number of CPU cores (Intel’s Xeon E7-8880 has “only” 22 processor cores), use more-advanced fabrication technology, and offer lower power consumption. And it’s all based on a CPU architecture that isn’t 40 years old. What’s not to like? How could this chip not displace Xeon and utterly dominate the server market?

Because it’s not an x86. After that, nothing else matters. It’s not as though server makers are in love with the x86 architecture (is anyone?), but they’re accustomed to it. It’s familiar. It’s reliable. And most of all, it runs all their existing code. So, unless Intel is suddenly revealed to be operating as a secret arm of the North Korean government and its employees start stepping on baby ducks, the company will continue to serve 99 percent of the server market. And reap 90-some percent of the profit therefrom.

But… but… isn’t technical progress supposed to matter? Isn’t our entire industry all about building better mousetraps, creative destruction, and constant improvement? Why can’t an objectively better product make a dent in market share?

Servers were supposed to be the perfect market for exactly this kind of upheaval, this changing of the guard. Unlike PCs, laptops, and phones, servers don’t generally run prepackaged third-party software. In fact, you can get by with just four basic programs, the so-called LAMP stack (Linux, Apache, MySQL, and PHP). All four are open-source projects, which means they’re compatible with any processor under the sun. They’re processor-agnostic by design. Heck, the Internet itself is processor-agnostic by design. This whole corner of the industry was supposed to foster competition and churn. Why aren’t we changing server architectures every few weeks? What is wrong with us?
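
If you want to see what “processor-agnostic” looks like from a developer’s chair, here’s a minimal sketch in Python (standing in, purely for illustration, for the PHP half of that stack): the same script runs unmodified on a Xeon box or an ARM server, and only the architecture string it reports changes.

```python
# Minimal sketch of processor-agnostic server code. The application logic is
# identical on x86-64 and ARM64 hosts; only the reported machine string differs.
import platform

def describe_host():
    """Return a short description of the machine this interpreter runs on."""
    return {
        "machine": platform.machine(),    # e.g. 'x86_64' on a Xeon, 'aarch64' on an ARM server
        "system": platform.system(),      # e.g. 'Linux'
        "python": platform.python_version(),
    }

if __name__ == "__main__":
    info = describe_host()
    print(f"Serving from a {info['machine']} box running {info['system']} "
          f"(Python {info['python']}) -- same code either way.")
```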

We’re human. We have jobs. Jobs with deadlines. And switching processor architectures is notoriously detrimental to deadline management. Yes, you could make the argument – and many have – that ARM-based server chips will save you money on the purchase price, money on electrical costs, money on cooling, and money on software. CapEx and OpEx both benefit with ARM, so the logic goes. But hardly anybody’s buying.
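
Here’s the back-of-the-envelope version of that CapEx/OpEx argument, sketched in Python. Every figure in it is a made-up placeholder (not a quoted price, a measured wattage, or a real electricity rate); the only point is the shape of the math: a cheaper chip plus lower power plus less cooling is supposed to compound over a few years of 24/7 operation.

```python
# Back-of-the-envelope CapEx + OpEx comparison. ALL numbers are hypothetical
# placeholders, chosen only to illustrate the shape of the argument.

def three_year_cost(chip_price, watts, dollars_per_kwh=0.10,
                    cooling_overhead=0.5, years=3):
    """Purchase price (CapEx) plus electricity and a cooling surcharge (OpEx)."""
    kwh = watts * years * 365 * 24 / 1000.0
    opex = kwh * dollars_per_kwh * (1 + cooling_overhead)
    return chip_price + opex

xeon_ish = three_year_cost(chip_price=4000, watts=150)   # hypothetical x86 part
arm_ish  = three_year_cost(chip_price=2000, watts=100)   # hypothetical ARM part

print(f"x86-ish, 3 years: ${xeon_ish:,.0f}")
print(f"ARM-ish, 3 years: ${arm_ish:,.0f}")
```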

It’s not for lack of supply. All the cool kids started making ARM-based server chips a few years ago. It was the hot new business opportunity. Now most of those projects are just lukewarm. Calxeda (now defunct), Applied Micro (X-Gene), AMD (the “ambidextrous” Opteron A1100), Cavium (ThunderX2), and Broadcom (Vulcan) have all met with little success. Broadcom just got acquired by Avago, and its Vulcan project has gone strangely quiet. AMD’s latest scheme to step out of Intel’s shadow has been about as successful as its previous campaigns. Google, Facebook, and Amazon keep teasing us with plans to install acres of ARM-based servers powered by wind farms and unicorn smiles, but no rainbow-colored server ranches ever materialize.

The ARM-based server vendors competing with Intel are just now finding out (perhaps too late) what the x86 clone vendors competing with Intel discovered 20 years ago: you can’t build a business by crouching under Intel’s pricing umbrella. Intel’s chips are so expensive, the reasoning goes, that you can design and market a competing processor for a fraction of the cost and still make money. Trouble is, it takes years to develop those chips, while it takes Intel perhaps 20 minutes to cut its prices. If the Santa Clara chipmaker decides you’re a nuisance, it swats you away with a price cut or, if it’s feeling particularly threatened, with an upgrade in fabrication technology. Once you’ve bled to death, prices are readjusted and order is restored. Next victim, please.

And competing with Intel on fabrication technology is never a good idea. Qualcomm says it’s using 10nm silicon for Centriq, versus 14nm for Xeon. That makes it 40% better! Well… yes and no. First off, silicon pitch is just one measure of a fabrication process. Gate density, contact patch, dielectric constants, chemistry, and lots of other arcana all contribute to the performance and power characteristics of any semiconductor process. And the numbers aren’t very accurate anyway. One company’s 10nm process is about equivalent to another company’s 12nm, or even 14nm, process. It’s like comparing Italian restaurants based on their pasta. “This one has 14-inch noodles! It must be better!”
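
For the record, here’s the naive arithmetic those headline numbers imply, sketched in Python. Taken at face value, 14nm to 10nm is roughly a 29% linear shrink and about half the area per feature, i.e. nearly double the density. But the named node hasn’t corresponded to a single physical dimension for years; real comparisons lean on things like contacted gate pitch, metal pitch, and SRAM cell size, which is exactly why one company’s 10nm can land next to another’s 14nm.

```python
# Naive node-number arithmetic: what '14nm -> 10nm' would mean if the label
# were a real, comparable feature size (it isn't).
old_node, new_node = 14.0, 10.0

linear_shrink = 1 - new_node / old_node     # ~29% smaller in one dimension
area_ratio = (new_node / old_node) ** 2     # ~51% of the old area per feature
density_gain = 1 / area_ratio               # ~1.96x, i.e. nearly double

print(f"Linear shrink: {linear_shrink:.0%}")
print(f"Area per feature: {area_ratio:.0%} of the old node")
print(f"Naive density gain: {density_gain:.2f}x")
```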

Last time I looked, Intel owned and operated its own fabs, while Qualcomm uses outside foundries such as TSMC or Samsung. That’s fine – nearly all chip companies now outsource their manufacturing – but it’s hardly the basis for a fab-technology faceoff. And Centriq is nearly a year away (the company says it’s sampling chips now), by which time Intel may have upgraded at least some Xeon models to a newer fab process. The 10nm headline sounds great, but methinks it is full of sound and fury, signifying nothing.

One thought on “The Triumph of Inertia”

  1. As Asia has frequently demonstrated to the US, taking a product from raw materials all the way to finished, application-level goods removes the middleman markups; that controls end-user product costs far better than letting dozens of middleman profit margins inflate them.

    Fabless does control low-volume costs, but controlling your means of production from beginning to end is the only way to control high-volume costs.

    And it’s the only way to stay in business when one or more of your middleman suppliers leaves the market.
