
ARM Servers “Cloud” the Issue

From Little Acorns Do Mighty Oak Trees Grow

Gather ’round, children, for the conclusion of our story on how ARM and x86 battled it out for global supremacy. Wielding PCs and servers as their weapons, these two titans—one large and mighty, the other small and nimble—thrust and parried as the populace quaked in fear. Who would emerge triumphant? Who would retreat in shame? And who will program these damn things?

Two weeks ago, we looked at how ARM chips probably won’t replace the venerable x86 in PCs, mostly because doing so would upend the very definition of a PC. If it don’t run Windows and it don’t run old copies of Flight Simulator, it ain’t a PC. An ARM-based machine might make a perfectly good “personal computer” in the generic sense, but that’s not the same thing.

But servers? Ah, that’s a different story entirely.

At first blush, ARM processors would seem to be singularly ill-suited for servers. After all, servers are big, beefy, he-man computers with lots of disk storage, tons of RAM, and exotic go-fast processors. ARM chips are wimpy, low-powered, and meek, better suited to iPhones and pastel-colored consumer items. How could you possibly make a decent server based on ARM chips?

Turns out you can, and the secret once again lies in the software. Except this time the situation is reversed. The enormous pile of software that keeps PC makers locked to the x86 doesn’t really exist for servers. Server systems (and here we’re talking both Web servers like Amazon’s and file/print servers like the ones you may have in your office) don’t have the same back catalog of third-party applications. Instead, servers tend to run just a few important programs, and most of those are open-source code.

Indeed, the major ingredients for a good server are the so-called LAMP set: Linux, Apache, MySQL, and PHP. If you’ve got those four, you’ve got yourself a perfectly good server. And guess what? All four are open-source, and all four have been ported to ARM processors (and just about every other CPU architecture).
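
To see how little of that stack actually cares about the CPU underneath, here’s a rough sketch of the idea in Python, with the standard library’s sqlite3 standing in for MySQL so it runs anywhere unmodified. It’s an illustration, not a real LAMP deployment; “hits.db” is just a throwaway file.

```python
# A loose sketch of the LAMP idea, not a real LAMP stack. Python and sqlite3
# stand in for PHP and MySQL so the example is self-contained; nothing in it
# depends on the CPU architecture underneath.
import platform
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

db = sqlite3.connect("hits.db")
db.execute("CREATE TABLE IF NOT EXISTS hits (path TEXT)")

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # One "dynamic page": log the request, count the hits, report back.
        db.execute("INSERT INTO hits VALUES (?)", (self.path,))
        db.commit()
        (count,) = db.execute("SELECT COUNT(*) FROM hits").fetchone()
        body = f"hit #{count}, served from a {platform.machine()} box\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), Handler).serve_forever()  # same code on ARM or x86
```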

An important characteristic of servers is that they’re accessed remotely. You don’t sit in front of a server the way you sit in front of a PC. In fact, that’s pretty much the definition of a server: a network-attached resource that you access purely through remote procedure calls. That means the user interface isn’t important, just the APIs. And since Web and Internet protocols are standardized (more or less), it doesn’t matter what kind of hardware or software is on the other side. It’s the classic Turing test: you can’t tell what’s on the other side of the curtain. As long as the APIs work, you could be talking to a machine powered by steam, a wound-up rubber band, or a person typing really fast at a keyboard.
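
Here’s what that curtain looks like from the client’s side: an HTTP exchange and nothing else. (This is just a sketch; example.com is a placeholder host.)

```python
# The client's-eye view: protocol, headers, and bytes are all you ever see.
# Nothing in the exchange reveals whether the far end is ARM, x86, SPARC,
# or a very fast typist. ("example.com" is only a placeholder host.)
from http.client import HTTPSConnection

conn = HTTPSConnection("example.com")
conn.request("GET", "/")
resp = conn.getresponse()

print(resp.status, resp.reason)         # e.g. 200 OK
print(resp.getheader("Server"))         # a self-reported string at best
print(len(resp.read()), "bytes of HTML")
```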

That abstraction layer is deliberate. The Internet is supposed to be hardware independent. It’s not supposed to rely on any one operating system, CPU architecture, or network structure. So the server market has typically been more varied and competitive than the market for PCs (or cell phones, or video games, etc.), with PowerPC, MIPS, SPARC, and other CPU families all represented. Servers are one of the few areas where computer makers really do compete on price and performance.

So where does ARM fit in all this? Aren’t the British company’s CPU designs a little, shall we say, underpowered for this type of work? Not if you use enough of them. That’s because another interesting characteristic of servers is that they lend themselves to multiprocessor designs. Servers are almost always servicing multiple unrelated requests at once, so they’re textbook examples of multicore, multithreading, and/or multiprocessing systems. A swarm of ARM processors does a mighty fine job of juggling multiple incoming requests. When they’re not busy, the processors shut down or drop into low-power mode. They’re rarely called on to do floating-point or graphics work, and they don’t need to take on big computing tasks; just lots of small ones. In short, a server would seem to be the ideal use case for clusters of relatively simple processors.
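
As a rough illustration (nobody’s actual server code), here’s what that lots-of-small-independent-jobs pattern looks like when you spread it across however many cores a machine happens to have:

```python
# Sketch of the "lots of small, unrelated jobs" server workload. Each job is
# independent and integer-heavy, with no floating point and no graphics, so
# it spreads naturally across a pool of modest cores.
from concurrent.futures import ProcessPoolExecutor
import hashlib
import os

def handle_request(payload: bytes) -> str:
    # Stand-in for one small request: hash the body, hand back a token.
    return hashlib.sha256(payload).hexdigest()

if __name__ == "__main__":
    requests = [os.urandom(512) for _ in range(10_000)]   # fake incoming traffic
    # One worker per core; on a many-core ARM box the pool simply gets wider.
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        results = list(pool.map(handle_request, requests, chunksize=256))
    print(f"handled {len(results)} requests on {os.cpu_count()} cores")
```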

But power efficiency is where this really pays off. Big server farms (think Amazon, Google, eBay, and so on) easily blow more money on air conditioning than they spend on computer hardware. Dissipating heat is a big deal, so any added thermal efficiency saves them money in the long run. And efficiency is where ARM made its name, so it’s the obvious first choice. The fact that multiple chip vendors all compete to produce ARM chips just sweetens the deal. Server makers can choose from among different ARM-based chips—or x86-based chips—when designing their next server. It looks like some of them are leaning toward ARM.

There’s no reason that MIPS, SPARC, PowerPC, or any other CPU family can’t compete in this space, too. In fact, they do. Servers are CPU-independent, after all, so as long as you’ve got the LAMP code you’ve got the basics for a dedicated server. Oracle certainly likes to use SPARC processors, IBM favors PowerPC, and Hewlett-Packard prefers Itanium (in which it invested many millions of dollars). To anyone but a hardware engineer, these boxes are more or less interchangeable.

But don’t server chips need to be 64-bit designs? If so, that would exclude ARM. The fact is, servers need only big addressing, not big arithmetic. They need to access a lot of memory; they don’t usually need to do complex math. For most CPU companies, that means 64-bit addressing and 64-bit data. In ARM’s case, the top-of-the-line Cortex-A15 has 40-bit addressing, which is enough to access 1TB of memory. That’s sufficient for today’s servers, even though it doesn’t have quite the sex appeal of the 64-bit label. That’ll probably come in a year or so. ARM has never been out front in the CPU-performance race, preferring to lag a couple of years behind its rivals. Based on the company’s position in the marketplace, however, I’d say that strategy has served it well. 
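
For the record, the arithmetic behind that 1TB figure is nothing more exotic than powers of two:

```python
# Quick check of the addressing arithmetic above.
print(2**32)   # 4,294,967,296 bytes: 4 GB, the classic 32-bit ceiling
print(2**40)   # 1,099,511,627,776 bytes: 1 TB, the Cortex-A15's 40-bit reach
print(2**64)   # 18,446,744,073,709,551,616 bytes: 16 EB, a full 64-bit space
```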
