
The x86 Moat

Can Intel Defend the Data Center?

Fortresses seldom fall of their own accord. Designed by engineers, they typically have the wherewithal to hold off the anticipated attacks. Historically, the most common cause of fortress demise has been the unanticipated – a change in the underlying assumptions. When rifled cannon came into existence, for example, the underlying assumption behind most fortress design was broken. Almost overnight, defenses that had been solid and reliable for decades became almost useless for their intended purpose. After a time, ticket booths were installed and the fortresses were transformed into relics – museums and monuments to a bygone era.

Intel’s data center fortress is defended by the x86 moat.

For most of the modern history of the data center, Intel has dominated the market. Sure, the company has formidable semiconductor fabrication technology and some of the world’s foremost experts in processor design. Intel also has the sales, marketing, and support network that owns the accounts and maintains the relationships that give the company’s products an automatic inroad. All of these tactical elements help Intel defend its position in the market. They work together to ensure that Intel is almost never swimming upstream to win a key deal. They are not sufficient, however, to account for Intel’s consistent, continued success in the data center.

The single factor that most locks Intel’s hardware into the sockets that sit on the blades that slip into the racks that line the rows of just about every data center on the planet is the x86 moat. Just about every piece of software in the universe was written and optimized for the x86 architecture. There are billions and billions of lines of code out there working every day that have been debugged and tested and proven to work reliably (well, as reliably as software gets, anyway) on Intel’s architecture. Before any attacker can hope to displace the incumbent supplier, they have to convince the customer that changing processor architectures is really not that big a deal. 

The first and smallest obstacle (which is still formidable) is the recompile. The majority of the software in the universe can be transitioned to a different processor architecture with a simple recompile. Assuming compilers do their jobs correctly, a span can be built relatively easily across the moat. But the entry assumptions for that strategy are challenging. Before one can recompile, one needs the source code. Usually, the source code is in the hands of a third party – the software vendor. Often, that vendor is not – and has no intention of – porting (and supporting) its code to an alternative processor architecture that accounts for less than 10% of the market when the code already works, sells, and is easy to support on the dominant x86 architecture.
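
To make the "simple recompile" concrete, here is a minimal sketch, assuming a trivially portable C source file and a GNU cross-compiler toolchain (the aarch64-linux-gnu-gcc name is an assumption based on common Linux distributions). Real data-center applications, of course, drag in platform-specific dependencies that make this far less clean.

```c
/*
 * hello_port.c - a trivially portable program illustrating the
 * "just recompile" path across the x86 moat.
 *
 * Hypothetical build commands (toolchain names vary by distribution):
 *
 *   gcc -O2 -o hello_x86_64 hello_port.c                   # native x86-64 build
 *   aarch64-linux-gnu-gcc -O2 -o hello_arm64 hello_port.c  # ARM64 cross-build
 *
 * The source is unchanged; only the compiler target differs. The hard part,
 * as described above, is owning (or obtaining) the source in the first place.
 */
#include <stdio.h>

int main(void)
{
#if defined(__x86_64__)
    printf("Built for x86-64\n");
#elif defined(__aarch64__)
    printf("Built for ARM64 (AArch64)\n");
#else
    printf("Built for some other architecture\n");
#endif
    return 0;
}
```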

And, if you’re going to migrate your data center to a different processor type, you actually need not just one piece, but all of your software to be moved over to the new platform. It’s a formidable challenge. If you are operating the type of data center that runs a wide array of different applications, you’re basically locked right there. There is no practical way to migrate everything you run to another type of processor.

Of course, there is always virtualization. Virtualization can allow you to run cross-platform in many cases. But then you have to ask yourself: why? If your motive for moving was to save power or improve performance, you will probably give those gains back to the overhead of the virtualization layer. If you were trying to save money on the hardware itself – well, shame on you, but you will also offset some of that savings with the cost of the virtualization software itself.

But, what about single-purpose server farms? What about those that run only (or predominantly) software under the control of one company, or are used for a single task? What about the servers operated by the Googles, Facebooks, Bings, Amazons, or YouTubes of the world? These folks have the resources and control to get their software compiled to run on anything they want, they control enormous quantities of servers, and they stand to gain the most by improving the energy efficiency and performance of their server installations.

These are the armies who could span the Intel moat and invade the fortress. But to what end? There would need to be an incentive – a way to significantly improve their situation. There are those, for example, who have gone to ARM architectures, presumably for better energy efficiency, but also probably just to keep Intel honest. Competition improves the breed, and an unchallenged incumbent is not motivated to improve things. Throw a little support to a rival and even the most complacent of leaders will crank out a bit of renewed enthusiasm.

But all these things are the expected. And, as we said before, well-designed fortresses are very good at protecting against the expected. For the fortress to be truly at risk – for Intel’s position in the data center to be realistically challenged in a meaningful way – we would need to see a sea change – an event that profoundly alters the nature of the game – a discontinuity.  

FPGA-based acceleration is that discontinuity. 

If the creation of heterogeneous processors with von Neumann machines sharing the workload with FPGA-based accelerators can improve energy efficiency in the data center by orders of magnitude, we have a compelling event worth an incredible amount of money – and trouble. While it might not be worth recompiling your code onto a new architecture to save 5-10% in power, it would most definitely be worth it for a 90%+ power reduction. And, it’s worth more than a recompile – it’s worth a full-blown port. In fact, in some cases it might even be worth deploying a bunch of hardware engineers on your particular algorithm in order to get the maximum benefit from this revolutionary architecture advance. 
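
To see why the size of the savings changes the calculus, here is a back-of-the-envelope sketch with deliberately hypothetical inputs (the 10 MW facility, $0.10-per-kWh price, and 1.5 PUE below are illustrative assumptions, not figures from any particular operator):

```c
/*
 * power_savings.c - illustrative arithmetic only; every input below is an
 * assumed, hypothetical figure chosen to show scale, not measured data.
 */
#include <stdio.h>

int main(void)
{
    const double it_load_mw     = 10.0;   /* assumed IT load of the facility, megawatts */
    const double pue            = 1.5;    /* assumed power usage effectiveness (facility overhead) */
    const double price_per_kwh  = 0.10;   /* assumed electricity price, USD per kWh */
    const double hours_per_year = 24.0 * 365.0;

    const double annual_kwh  = it_load_mw * 1000.0 * pue * hours_per_year;
    const double annual_cost = annual_kwh * price_per_kwh;

    /* Compare a modest architectural win against a 90% reduction. */
    const double savings_small = annual_cost * 0.07;  /* a ~5-10% class improvement */
    const double savings_large = annual_cost * 0.90;  /* an order-of-magnitude class improvement */

    printf("Annual power bill:  $%.0f\n", annual_cost);
    printf("Savings at ~7%%:    $%.0f\n", savings_small);
    printf("Savings at 90%%:    $%.0f\n", savings_large);
    return 0;
}
```

With those assumed numbers, the single-digit-percentage case is worth well under a million dollars a year per facility, while the 90% case is worth over ten million – the kind of delta that starts to look like it justifies a full-blown port.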

But two things have to happen in order to create that opportunity. First, somebody has to build a standardized processor that incorporates FPGA-based accelerators in a way that allows applications to easily take advantage of their potential. Second, somebody has to create a sufficiently simple path for migration of existing software onto the new platform.

Obviously, Intel is not oblivious to this brewing storm. The company just bought Altera for a ridiculous amount of money. Ridiculous, that is, only if you don’t believe that FPGA technology is essential to Intel’s defense of the fortress. If, on the other hand, you believe that the revolution is coming, and that the rebels are carrying FPGAs, Intel just might have bought itself a bargain.

With Altera’s help, Intel appears to be ahead of the insurgents in creating the hardware piece of this puzzle. The company announced back in 2014 that it would build a Xeon processor with a built-in FPGA, and that project is presumably well down the path toward reality. On the software side, Altera jumped on the OpenCL bandwagon before any of the other FPGA companies, and it has built a reasonable portfolio of success stories where customers have used the technology to accelerate critical applications. Intel/Altera are bound to make mistakes, suffer false starts, and commit glaring tactical errors. Every large collaborative endeavor includes these things. The question is: will they make any flubs serious enough to leave a gap that competitors can exploit?
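
For context on what the OpenCL path looks like from the application side, below is a minimal, generic OpenCL C kernel of the sort a host program hands off to an accelerator. This is a plain illustration of the programming model, not code from Altera’s SDK; the kernel and argument names are hypothetical, and with FPGA tool flows such a kernel is typically compiled offline into a hardware image rather than just-in-time on the host.

```c
/*
 * A generic OpenCL C kernel - the unit of work a host application hands off
 * to an accelerator (GPU, FPGA, or otherwise). Names here are hypothetical;
 * the qualifiers (__kernel, __global) are standard OpenCL C.
 */
__kernel void saxpy(const float alpha,
                    __global const float *x,
                    __global const float *y,
                    __global float *out)
{
    /* Each work-item handles one element; on an FPGA the compiler may
       instead build a deep pipeline that streams the whole array through. */
    size_t i = get_global_id(0);
    out[i] = alpha * x[i] + y[i];
}
```

The catch, as this article argues, is everything around a kernel like this: the host code, the data movement, and the fact that most legacy software was never structured as kernels in the first place.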

Outside the fortress walls, the rest of the world is gathering with torches. Alliances are being formed that should result in some formidable competitors to Intel’s next-generation machines. However, none of those efforts appears yet to have a solid strategy for solving the most critical problem – the programming model for legacy software. On the other hand, beyond the already-announced strategies that Altera is pursuing (such as OpenCL), neither does Intel.

Right now, the most promising technology for turning legacy software into something that can take advantage of FPGA-based acceleration is in the hands of the EDA industry. And, ironically, the EDA industry is showing no interest in applying its technology for that purpose. Further, the processor companies are making no visible effort to convince it. The only company that has clearly pursued a path toward acquiring and capitalizing on EDA’s technology is Xilinx, whose acquisition of several EDA startups over the years has put it in a key position in this forming battle.

It will be interesting to watch.
