The x86 Moat

Can Intel Defend the Data Center?

Fortresses seldom fall of their own accord. Designed by engineers, they typically have the wherewithal to hold off the anticipated attacks. Historically, the most common cause of fortress demise is the unanticipated: a change in the underlying assumptions. When rifled cannon came into existence, for example, the underlying assumption behind most fortress designs was broken. Almost overnight, defenses that had been solid and reliable for decades became almost useless for their intended purpose. After a time, ticket booths were installed, and the fortresses were transformed into relics: museums and monuments to a bygone era.

Intel’s data center fortress is defended by the x86 moat.

For most of the modern history of the data center, Intel has dominated the market. Sure, the company has formidable semiconductor fabrication technology and some of the world’s foremost experts in processor design. Intel also has the sales, marketing, and support network that owns the accounts and maintains the relationships that give the company’s products an automatic inroad. All of these tactical elements help Intel to defend its position in the market. Together, they ensure that Intel is almost never swimming upstream to win a key deal. They are not sufficient, however, to account for Intel’s continued success in the data center.

The single factor that most locks Intel’s hardware into the sockets that sit on the blades that slip into the racks that line the rows of just about every data center on the planet is the x86 moat. Just about every piece of software in the universe was written and optimized for the x86 architecture. There are billions and billions of lines of code out there working every day that have been debugged and tested and proven to work reliably (well, as reliably as software gets, anyway) on Intel’s architecture. Before any attacker can hope to displace the incumbent supplier, they have to convince the customer that changing processor architectures is really not that big a deal. 

The first and smallest obstacle (which is still formidable) is the recompile. The majority of the software in the universe can be transitioned to a different processor architecture with a simple recompile. Assuming compilers do their jobs correctly, a span can be built relatively easily across the moat. But the entry assumptions of that strategy are challenging. Before one can recompile, one needs the source code. Usually, the source code is in the hands of a third party (the software vendor). Often, that vendor has no intention of porting (and supporting) their code on an alternative processor architecture that accounts for less than 10% of the market when their product already works, sells, and is easy to support on the dominant x86 architecture.
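
For portable code, the recompile really is that mechanical. A minimal sketch (the file name and the ARM cross-compiler name here are illustrative; toolchain packages vary by distribution):

    /* hello.c - portable C source with no architecture-specific code.
     *
     * Native x86-64 build:   gcc -O2 hello.c -o hello-x86
     * ARM64 cross-build:     aarch64-linux-gnu-gcc -O2 hello.c -o hello-arm64
     */
    #include <stdio.h>

    int main(void) {
        printf("Same source, different instruction set.\n");
        return 0;
    }

The catch, as noted above, is that this only works if you hold the source code, and only if every library the program links against has also been built for the target architecture.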

And if you’re going to migrate your data center to a different processor type, you need not just one piece but all of your software moved over to the new platform. It’s a formidable challenge. If you operate the type of data center that runs a wide array of different applications, you’re basically locked in right there. There is no practical way to migrate everything you run to another type of processor.

Of course, there is always virtualization. Virtualization can allow you to run cross-platform in many cases. But then you have to ask yourself: why? If your motive for moving was to save power or improve performance, you will probably give those gains back to the overhead of the virtualization layer. If you were trying to save money on the hardware itself, well, shame on you, but you will also offset some of those savings with the cost of the virtualization software itself.

But, what about single-purpose server farms? What about those that run only (or predominantly) software under the control of one company, or are used for a single task? What about the servers operated by the Googles, Facebooks, Bings, Amazons, or YouTubes of the world? These folks have the resources and control to get their software compiled to run on anything they want, they control enormous quantities of servers, and they stand to gain the most by improving the energy efficiency and performance of their server installations.

These are the armies who could span the Intel moat and invade the fortress. But to what end? There would need to be an alternative that could significantly improve their situation. There are those, for example, who have gone to ARM architectures, presumably for better energy efficiency, but also probably just to keep Intel honest. Competition improves the breed, and an unchallenged incumbent is not motivated to improve things. Throw a little support to a rival, and even the most complacent of leaders will crank out a bit of renewed enthusiasm.

But all of these things are the expected. And, as we said before, well-designed fortresses are very good at protecting against the expected. For the fortress to be truly at risk, for Intel’s position in the data center to be challenged in a meaningful way, we would need to see a sea change: an event that profoundly alters the nature of the game. A discontinuity.

FPGA-based acceleration is that discontinuity. 

If the creation of heterogeneous processors, with von Neumann machines sharing the workload with FPGA-based accelerators, can improve energy efficiency in the data center by orders of magnitude, we have a compelling event worth an incredible amount of money (and trouble). While it might not be worth recompiling your code onto a new architecture to save 5-10% in power, it would most definitely be worth it for a 90%+ power reduction. And it’s worth more than a recompile; it’s worth a full-blown port. In fact, in some cases it might even be worth deploying a bunch of hardware engineers on your particular algorithm in order to get the maximum benefit from this revolutionary architectural advance.

But two things have to happen in order to create that opportunity. First, somebody has to build a standardized processor that incorporates FPGA-based accelerators in a way that allows applications to easily take advantage of their potential. Second, somebody has to create a sufficiently simple path for migration of existing software onto the new platform.

Obviously, Intel is not oblivious to this brewing storm. The company just bought Altera for a ridiculous amount of money (a reported $16.7 billion). Ridiculous, that is, only if you don’t believe that FPGA technology is essential to Intel’s defense of the fortress. If, on the other hand, you believe that the revolution is coming, and that the rebels are carrying FPGAs, Intel just might have bought itself a bargain.

With Altera’s help, Intel appears to be ahead of the insurgents in creating the hardware piece of this puzzle. The company announced back in 2014 that it would build a Xeon processor with a built-in FPGA, and that project is assumed to be well down the path toward reality. On the software side, Altera jumped on the OpenCL bandwagon before any other FPGA company, and it has built a reasonable portfolio of success stories where customers have used the technology to accelerate critical applications. Intel/Altera are bound to make mistakes, suffer false starts, and commit glaring tactical errors; every large collaborative endeavor includes these things. The question is: will they make any flubs serious enough to leave a gap that competitors can exploit?
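
To make that programming model concrete, here is a minimal OpenCL sketch: a trivial vector-add kernel plus the host code that dispatches it to a device. This is a generic illustration, not Altera’s actual flow; FPGA OpenCL toolchains typically compile kernels offline and load the result with clCreateProgramWithBinary(), whereas this sketch takes the runtime-compile path common on CPUs and GPUs for brevity, and it omits error checking:

    /* vadd.c - build with an OpenCL SDK installed: gcc vadd.c -lOpenCL */
    #include <stdio.h>
    #include <CL/cl.h>

    /* The kernel, written in OpenCL C: one work-item per array element. */
    static const char *kernel_src =
        "__kernel void vadd(__global const float *a,\n"
        "                   __global const float *b,\n"
        "                   __global float *c) {\n"
        "    int i = get_global_id(0);\n"
        "    c[i] = a[i] + b[i];\n"
        "}\n";

    int main(void) {
        enum { N = 1024 };
        float a[N], b[N], c[N];
        for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

        cl_platform_id plat;
        cl_device_id dev;
        clGetPlatformIDs(1, &plat, NULL);
        /* CL_DEVICE_TYPE_ACCELERATOR would select an FPGA board if present. */
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

        /* Runtime compile; FPGA flows would load a precompiled binary here. */
        cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, NULL);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "vadd", NULL);

        /* Copy inputs to device buffers; allocate one for the result. */
        cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   sizeof a, a, NULL);
        cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   sizeof b, b, NULL);
        cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);

        clSetKernelArg(k, 0, sizeof da, &da);
        clSetKernelArg(k, 1, sizeof db, &db);
        clSetKernelArg(k, 2, sizeof dc, &dc);

        /* Launch N work-items, then read the result back (blocking). */
        size_t global = N;
        clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);

        printf("c[10] = %f (expected 30.0)\n", c[10]);

        clReleaseMemObject(da); clReleaseMemObject(db); clReleaseMemObject(dc);
        clReleaseKernel(k); clReleaseProgram(prog);
        clReleaseCommandQueue(q); clReleaseContext(ctx);
        return 0;
    }

The appeal of the model is that the same kernel source can, in principle, target a CPU, a GPU, or an FPGA accelerator card; what changes is the toolchain and the device you select.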

Outside the fortress walls, the rest of the world is gathering with their torches. Alliances are being formed that should result in some formidable competitors to Intel’s next-generation machines. However, none of those efforts yet appears to have a solid strategy for solving the most critical problem: the programming model for legacy software. Then again, beyond the already-announced strategies that Altera is pursuing (such as OpenCL), neither does Intel.

Right now, the most promising technology for turning legacy software into something that can take advantage of FPGA-based acceleration is in the hands of the EDA industry. And, ironically, the EDA industry is showing no interest in applying its technology for that purpose. Further, the processor companies are making no show of trying to convince it. The only company that has clearly pursued a path toward acquiring and capitalizing on EDA’s technology is Xilinx, whose acquisition of several EDA startups over the years has put it in a key position in this forming battle.

It will be interesting to watch.
