
ARM Servers “Cloud” the Issue

From Little Acorns Do Mighty Oak Trees Grow

Gather ’round, children, for the conclusion of our story on how ARM and x86 battled it out for global supremacy. Wielding PCs and servers as their weapons, these two titans—one large and mighty, the other small and nimble—thrust and parried as the populace quaked in fear. Who would emerge triumphant? Who would retreat in shame? And who will program these damn things?

Two weeks ago, we looked at how ARM chips probably won’t replace the venerable x86 in PCs, mostly because doing so would upend the very definition of a PC. If it don’t run Windows and it don’t run old copies of Flight Simulator, it ain’t a PC. An ARM-based machine might make a perfectly good “personal computer” in the generic sense, but that’s not the same thing.

But servers? Ah, that’s a different story entirely.

At first blush, ARM processors would seem to be singularly ill-suited for servers. After all, servers are big, beefy, he-man computers with lots of disk storage, tons of RAM, and exotic go-fast processors. ARM chips are wimpy, low-powered, and meek, better suited to iPhones and pastel-colored consumer items. How could you possibly make a decent server based on ARM chips?

Turns out you can, and the secret once again lies in the software. Except this time the situation is reversed. The enormous pile of software that keeps PC makers locked to the x86 doesn’t really exist for servers. Server systems (and here we’re talking both Web servers like Amazon’s and file/print servers like the ones you may have in your office) don’t have the same back catalog of third-party applications. Instead, servers tend to run just a few important programs, and most of those are open source.

Indeed, the major ingredients for a good server are the so-called LAMP set: Linux, Apache, MySQL, and PHP. If you’ve got those four you’ve got yourself a perfectly good server. And guess what? All four are open-source, and all four have been ported to ARM processors (and just about every other CPU architecture).
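To make that portability concrete, here’s a quick sketch in Python (my illustration, not anything from the article): the same high-level code runs unmodified whether the machine underneath calls itself x86 or ARM, and the open-source stack above it behaves the same way.

```python
# Illustrative sketch: open-source server software doesn't care what CPU it
# runs on. The same script reports "x86_64" on a typical PC-class server and
# "armv7l" or "aarch64" on an ARM-based box, and the stack above it behaves
# identically either way.
import platform

machine = platform.machine()  # e.g. "x86_64", "armv7l", "aarch64"
print(f"This box reports a '{machine}' CPU; the software neither knows nor cares.")
```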

An important characteristic of servers is that they’re accessed remotely. You don’t sit in front of a server the way you sit in front of a PC. In fact, that’s pretty much the definition of a server: a network-attached resource that you access purely through remote procedure calls. That means the user interface isn’t important, just the APIs. And since Web and Internet protocols are standardized (more or less), it doesn’t matter what kind of hardware or software is on the other side. It’s the classic Turing test: you can’t tell what’s on the other side of the curtain. As long as the APIs work, you could be talking to a machine powered by steam, a wound-up rubber band, or a person typing really fast at a keyboard.
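If you want to see how thin that curtain is, here’s a minimal sketch in Python (the URL is just a placeholder): the client speaks plain HTTP, and nothing in the exchange tells it what kind of processor answered.

```python
# Minimal sketch of the "curtain" (placeholder URL): an HTTP client sees only
# a status code, headers, and a body. No field in the protocol reveals whether
# the machine answering is x86, ARM, SPARC, or a very fast typist.
import urllib.request

def fetch(url):
    """Issue a plain HTTP GET and return the status plus the response headers."""
    with urllib.request.urlopen(url) as response:
        return response.status, dict(response.getheaders())

if __name__ == "__main__":
    status, headers = fetch("http://example.com/")
    print(status, headers.get("Server", "unknown server software"))
```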

That abstraction layer is deliberate. The Internet is supposed to be hardware independent. It’s not supposed to rely on any one operating system, CPU architecture, or network structure. So the server market has typically been more varied and competitive than the market for PCs (or cell phones, or video games, etc.), with PowerPC, MIPS, SPARC, and other CPU families all represented. Servers are one of the few areas where computer makers really do compete on price and performance.

So where does ARM fit in all this? Aren’t the British company’s CPU designs a little, shall we say, underpowered for this type of work? Not if you use enough of them. That’s because another interesting characteristic of servers is that they lend themselves to multiprocessor designs. Servers are almost always servicing multiple unrelated requests at once, so they’re textbook examples of multicore, multithreaded, and/or multiprocessor systems. A swarm of ARM processors does a mighty fine job of juggling multiple incoming requests, and when they’re not busy, the processors shut down or drop into low-power mode. They’re rarely called on to do floating-point or graphics work, and they don’t need to take on big computing tasks, just lots of small ones. In short, a server would seem to be the ideal use case for clusters of relatively simple processors.
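Here’s a rough illustration of that idea in Python (a sketch, not anyone’s production server): a pool of modest workers, standing in for a cluster of small cores, chews through a batch of independent requests.

```python
# Rough illustration: server work is lots of small, independent requests,
# which spread naturally across many modest workers, much as they would
# spread across a cluster of low-power ARM cores.
from concurrent.futures import ThreadPoolExecutor

def handle_request(request_id):
    """Stand-in for one small, independent unit of server work."""
    # No floating point, no graphics, just a little integer bookkeeping.
    return f"request {request_id}: served"

if __name__ == "__main__":
    # Eight workers standing in for eight simple cores.
    with ThreadPoolExecutor(max_workers=8) as pool:
        for result in pool.map(handle_request, range(32)):
            print(result)
```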

But it’s power efficiency where this really pays off. Big server farms (think Amazon, Google, eBay, and so on) easily blow more money on air conditioning than they spend on computer hardware. Dissipating heat is a big deal, so any added thermal efficiency pays off for them in the long run. And efficiency is where ARM made its name, so it’s the obvious first choice. The fact that multiple chip vendors all compete to produce ARM chips just sweetens the deal. Server makers can choose from among different ARM-based chips—or x86-based chips—when designing their next server. It looks like some of them are leaning toward ARM.

There’s no reason that MIPS, SPARC, PowerPC, or any other CPU family can’t compete in this space, too. In fact, they do. Servers are CPU-independent, after all, so as long as you’ve got the LAMP code you’ve got the basics for a dedicated server. Oracle certainly likes to use SPARC processors, IBM favors PowerPC, and Hewlett-Packard prefers Itanium (in which it invested many millions of dollars). To anyone but a hardware engineer, these boxes are more or less interchangeable.

But don’t server chips need to be 64-bit designs? If so, that would exclude ARM. The fact is, servers need only big addressing, not big arithmetic. They need to access a lot of memory; they don’t usually need to do complex math. For most CPU companies, that means 64-bit addressing and 64-bit data. In ARM’s case, the top-of-the-line Cortex-A15 has 40-bit addressing, which is enough to access 1TB of memory. That’s sufficient for today’s servers, even though it doesn’t have quite the sex appeal of the 64-bit label. That’ll probably come in a year or so. ARM has never been out front in the CPU-performance race, preferring to lag a couple of years behind its rivals. Based on the company’s position in the marketplace, however, I’d say that strategy has served it well. 
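For the curious, the arithmetic behind that 40-bit figure checks out; here’s a quick sanity check in Python.

```python
# 2**40 bytes is one (binary) terabyte, versus the familiar 4 GB ceiling of a
# 32-bit address space.
GiB = 2**30
print(f"32-bit addressing: {2**32 // GiB} GB")             # 4 GB
print(f"40-bit addressing: {2**40 // GiB} GB, i.e. 1 TB")  # 1024 GB
```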
