
ARM Servers “Cloud” the Issue

From Little Acorns Do Mighty Oak Trees Grow

Gather ’round, children, for the conclusion of our story on how ARM and x86 battled it out for global supremacy. Wielding PCs and servers as their weapons, these two titans—one large and mighty, the other small and nimble—thrust and parried as the populace quaked in fear. Who would emerge triumphant? Who would retreat in shame? And who will program these damn things?

Two weeks ago, we looked at how ARM chips probably won’t replace the venerable x86 in PCs, mostly because doing so would upend the very definition of a PC. If it don’t run Windows and it don’t run old copies of Flight Simulator, it ain’t a PC. An ARM-based machine might make a perfectly good “personal computer” in the generic sense, but that’s not the same thing.

But servers? Ah, that’s a different story entirely.

At first blush, ARM processors would seem to be singularly ill-suited for servers. After all, servers are big, beefy, he-man computers with lots of disk storage, tons of RAM, and exotic go-fast processors. ARM chips are wimpy, low-powered, and meek, better suited to iPhones and pastel-colored consumer items. How could you possibly make a decent server based on ARM chips?

Turns out you can, and the secret once again lies in the software. Except this time the situation is reversed. The enormous pile of software that keeps PC makers locked to the x86 doesn’t really exist for servers. Server systems (and here we’re talking both Web servers like Amazon’s and file/print servers like you may have in your office) don’t have the same back catalog of third-party applications. Instead, servers tend to run just a few important programs, and most of those are open-source code.

Indeed, the major ingredients for a good server are the so-called LAMP set: Linux, Apache, MySQL, and PHP. If you’ve got those four, you’ve got yourself a perfectly good server. And guess what? All four are open-source, and all four have been ported to ARM processors (and just about every other CPU architecture).

An important characteristic of servers is that they’re accessed remotely. You don’t sit in front of a server the way you sit in front of a PC. In fact, that’s pretty much the definition of a server: a network-attached resource that you access purely through remote procedure calls. That means the user interface isn’t important, just the APIs. And since Web and Internet protocols are standardized (more or less), it doesn’t matter what kind of hardware or software is on the other side. It’s the classic Turing test: you can’t tell what’s on the other side of the curtain. As long as the APIs work, you could be talking to a machine powered by steam, a wound-up rubber band, or a person typing really fast at a keyboard.
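
To make that concrete, here’s a minimal sketch in Python (the URL is just the standard illustrative example domain): a client fetches a page, and all it can ever observe is the status, the headers, and the bytes. Nothing in the exchange reveals what’s behind the curtain.

```python
# A minimal sketch of the point above: an HTTP client sees only the
# protocol, never the hardware behind it. The URL is illustrative.
from urllib.request import urlopen

with urlopen("http://example.com/") as response:
    # Everything the client can observe: status, headers, body bytes.
    print(response.status)                       # e.g. 200
    print(response.headers.get("Content-Type"))  # e.g. text/html
    body = response.read()

# Nothing in the response says whether the server runs on x86, ARM,
# SPARC, or a wound-up rubber band, only that it speaks HTTP.
```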

That abstraction layer is deliberate. The Internet is supposed to be hardware independent. It’s not supposed to rely on any one operating system, CPU architecture, or network structure. So the server market has typically been more varied and competitive than the market for PCs (or cell phones, or video games, etc.), with PowerPC, MIPS, SPARC, and other CPU families all represented. Servers are one of the few areas where computer makers really do compete on price and performance.

So where does ARM fit in all this? Aren’t the British company’s CPU designs a little, shall we say, underpowered for this type of work? Not if you use enough of them. That’s because another interesting characteristic of servers is that they lend themselves to multiprocessor designs. Servers are almost always servicing multiple unrelated requests at once, so they’re textbook examples of multicore, multithreading, and/or multiprocessing systems. A swarm of ARM processors does a mighty fine job of juggling multiple incoming requests. When they’re not busy, the processors shut down or drop into low-power mode. They’re rarely called on to do floating-point or graphics work, and they don’t need to take on big computing tasks, just lots of small ones. In short, a server would seem to be the ideal use case for clusters of relatively simple processors.
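
As a toy illustration of that juggling act, here’s a hedged Python sketch. The workload and the worker count are made up; the point is only that throughput comes from how many workers you have, not how fast any one of them is, which is exactly the trade-off a many-small-cores server makes.

```python
# Toy sketch of the "swarm of small cores" idea: many independent,
# I/O-bound requests spread across a pool of cheap workers.
from concurrent.futures import ThreadPoolExecutor
import time

def handle_request(request_id: int) -> str:
    # Each request is small and unrelated to the others; it needs no
    # floating-point muscle, just a worker that happens to be free.
    time.sleep(0.01)  # stand-in for disk or network I/O
    return f"response {request_id}"

# Sixteen modest workers clear 100 independent requests with ease.
with ThreadPoolExecutor(max_workers=16) as pool:
    responses = list(pool.map(handle_request, range(100)))

print(len(responses))  # 100
```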

But power efficiency is where this approach really pays off. Big server farms (think Amazon, Google, eBay, and so on) easily blow more money on air conditioning than they spend on computer hardware. Dissipating heat is a big deal, so any added thermal efficiency pays off for them in the long run. And efficiency is where ARM made its name, so it’s the obvious first choice. The fact that multiple chip vendors all compete to produce ARM chips just sweetens the deal. Server makers can choose from among different ARM-based chips—or x86-based chips—when designing their next server. It looks like some of them are leaning toward ARM.
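
To see why, try some back-of-the-envelope arithmetic. Every number in the sketch below is hypothetical, picked only to show the shape of the math, not to describe any real data center.

```python
# Back-of-envelope sketch of why watts matter to a server farm.
# All numbers are hypothetical placeholders.
servers = 10_000
watts_per_server = 300      # hypothetical average draw per box
pue = 1.8                   # power usage effectiveness: cooling and
                            # other facility overhead, as a multiplier
dollars_per_kwh = 0.10      # hypothetical electricity price
hours_per_year = 24 * 365

facility_kw = servers * watts_per_server * pue / 1000
annual_power_cost = facility_kw * hours_per_year * dollars_per_kwh
print(f"${annual_power_cost:,.0f} per year")  # ~$4.7M at these numbers

# Halving watts_per_server halves that bill, which is why a more
# efficient CPU can matter more than a cheaper one.
```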

There’s no reason that MIPS, SPARC, PowerPC, or any other CPU family can’t compete in this space, too. In fact, they do. Servers are CPU-independent, after all, so as long as you’ve got the LAMP code you’ve got the basics for a dedicated server. Oracle certainly likes to use SPARC processors, IBM favors PowerPC, and Hewlett-Packard prefers Itanium (in which it invested many millions of dollars). To anyone but a hardware engineer, these boxes are more or less interchangeable.

But don’t server chips need to be 64-bit designs? If so, that would exclude ARM. The fact is, servers need only big addressing, not big arithmetic. They need to access a lot of memory; they don’t usually need to do complex math. For most CPU companies, that means 64-bit addressing and 64-bit data. In ARM’s case, the top-of-the-line Cortex-A15 has 40-bit physical addressing, which is enough to access 1 TB of memory. That’s sufficient for today’s servers, even though it doesn’t have quite the sex appeal of the 64-bit label. Full 64-bit support will probably come in a year or so. ARM has never been out front in the CPU-performance race, preferring to lag a couple of years behind its rivals. Based on the company’s position in the marketplace, however, I’d say that strategy has served it well.
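
The arithmetic behind that 1 TB figure is simple enough to check in a couple of lines:

```python
# 2**40 one-byte addresses: the reach of 40-bit addressing, next to
# the familiar 4 GiB ceiling of a plain 32-bit address space.
GiB = 2**30
print(2**32 // GiB)  # 4    -> 32-bit addressing tops out at 4 GiB
print(2**40 // GiB)  # 1024 -> 40-bit addressing reaches 1024 GiB, i.e. 1 TiB
```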
