
Chaos in the Cloud

The Battle for Data Center Supremacy

Storm clouds are building on the horizon. Massive thunderheads darken the sky, their convective currents creating blinding static outbursts as enormous caches of charge follow the shortest path to ground. Change is coming to computing, and it’s not going to be smooth or evolutionary. With the IoT driving the cloud and the cloud defining datacenter computing, we are about to be in the midst of perhaps the greatest discontinuous change in the history of computation.

With predictions that the IoT will deploy a trillion sensors worldwide within the next few years, and with billions of connected devices already in the hands of the majority of the civilized world, we are on the brink of an incredible revolution in technology. From self-driving cars to big-data-bolstered healthcare, we will soon see technological feats that even science fiction failed to predict.

At the heart of this revolution is a rapidly changing global computing infrastructure that will process, filter, coalesce, and cross-correlate staggering amounts of data from trillions of independent sources, giving us the information and (eventually) the intelligence to benefit humanity in ways we have barely dreamed of. But what will this global computing infrastructure look like? To paraphrase Douglas Adams’s “Deep Thought”: “I speak of none but the computer that is to come… whose merest operational parameters I am not worthy to calculate…” and yet, we will now try to explain it to you.

Almost the entire electronics industry is in some way now engaged in designing parts of the largest heterogeneous distributed computing engine ever imagined. From fast-paced startups chasing the wearable sensor wealth to old-guard heavy-iron rack builders deploying clouds-in-a-box, just about every aspect of the technology infrastructure is contributing. Starting at the sensors, we can follow the data from the ultra-low-power MCUs in the endpoint through the maze of IoT branches to the backbone of the Internet and ultimately to massive server farms doing cloud-based computing and storage. At every node in this massive web, the key drivers are power and performance. We need maximum throughput with minimum latency on a tiny power budget.

Interestingly, at just about every juncture, programmable logic and FPGA technology play a starring role. Let’s start at the sensor, where massive amounts of data must be gathered and filtered. Many sensors are “always on” but produce interesting data only intermittently. Behind the sensor we need ultra-low-power monitors that watch and wait while the rest of the system sleeps. This standby watcher function is often performed by ultra-low-power programmable logic devices that can sip microwatts while keeping a watchful eye on the sensors. When something interesting does happen, these programmable logic devices can kick into “sensor fusion” gear, aggregating and filtering the input from multiple sensors and trying to derive “context.” What does this set of inputs mean? Perhaps they tell us our customer is now out for a run. That simple piece of information can now be passed up the line, rather than an enormous mass of raw sensor data.
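To make that “watch, fuse, summarize” pattern concrete, here is a minimal sketch of an endpoint that stays quiet until an accelerometer crosses a wake threshold, fuses a short window of samples into a single activity label, and sends only that label upstream. The thresholds, the window size, and the send_upstream hook are hypothetical placeholders for illustration, not any vendor’s API, and the logic shown here in software is the kind of job that would typically live in low-power programmable logic at the endpoint.

```python
import statistics

WAKE_THRESHOLD_G = 1.3   # hypothetical wake-up level (in g) for the always-on watcher
WINDOW_SAMPLES = 64      # hypothetical fusion window length

def classify_activity(samples_g):
    """Crude 'sensor fusion': reduce a window of accelerometer magnitudes to one label."""
    mean = statistics.mean(samples_g)
    spread = statistics.pstdev(samples_g)
    if mean > 1.1 and spread > 0.4:
        return "running"
    if spread > 0.15:
        return "walking"
    return "idle"

def endpoint_loop(sample_stream, send_upstream):
    """Watch quietly; when motion appears, send one small event instead of raw samples."""
    window = []
    for magnitude in sample_stream:
        if not window and magnitude < WAKE_THRESHOLD_G:
            continue                                  # nothing interesting: stay in watch mode
        window.append(magnitude)
        if len(window) == WINDOW_SAMPLES:
            send_upstream({"event": classify_activity(window)})
            window.clear()                            # go back to watching

# Example: 200 quiet samples generate no traffic; one burst of motion becomes one tiny event.
quiet = [1.0] * 200                                   # below the wake threshold
burst = [1.5 + 0.9 * (-1) ** i for i in range(64)]    # alternating 0.6 g / 2.4 g, footfall-like
endpoint_loop(quiet + burst, send_upstream=print)     # prints {'event': 'running'}
```

The payoff is in the last line: hundreds of raw samples collapse into one small event, which is exactly the kind of traffic reduction that makes a trillion-sensor network tractable.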

When massive amounts of data do get passed up the line, it often happens wirelessly. FPGAs sit very close to the antennas on wireless base stations, performing high-speed signal processing on the antenna output before it is passed to the next stage. Then, FPGAs push packets through multi-gigabit pipes as the data makes its way toward the backbone of the Internet. Once in the backbone, it hits the traditional sweet spot for programmable logic. FPGAs have been the go-to technology for packet switching for the past two decades.

When those packets arrive at the datacenter, FPGAs are on the job again, gathering gobs of incoming data and helping to distribute it to the appropriate racks of servers. In fact, just about every time there is a giant data pipe, there are FPGAs at both ends pushing and pulling the data and routing it off into smaller pipes.

It is at this point that the structure of the datacenter becomes much more application specific. Different server architectures work best for different problems. The optimal proximity of processor, accelerator, memory, and storage, and the types of connections used between them are determined by the task at hand. All clouds may look alike from the outside, but they are actually arranged differently depending on what tasks they are supposed to be performing.

Once inside the individual server blade, we hit the point where FPGAs are making their move into the last bastion of conventional processors. In these pages, we have discussed at length Intel’s strategy to create heterogeneous computers that include both conventional processors and FPGA fabric in the same package, and probably even on the same chip. FPGAs can accelerate selected computing tasks enormously while reducing overall power consumption, so merging processors with FPGAs will be an improvement of epic proportions. Once the ecosystem is in place, these heterogeneous processors will revolutionize the datacenter in terms of performance-per-watt.
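The performance-per-watt argument is easy to see with a little arithmetic. The sketch below compares a CPU-only server against the same server with a hot kernel offloaded to FPGA fabric; the throughput and power numbers are hypothetical round figures chosen purely for illustration, not measurements of any real system.

```python
def perf_per_watt(throughput_ops_s, power_w):
    """Performance-per-watt: useful operations delivered per joule consumed."""
    return throughput_ops_s / power_w

# Hypothetical round numbers, purely for illustration (not measured figures).
cpu_only = perf_per_watt(throughput_ops_s=1.0e9, power_w=150)        # kernel in software on the CPU
cpu_plus_fpga = perf_per_watt(throughput_ops_s=10.0e9, power_w=175)  # same kernel offloaded to FPGA fabric

print(f"CPU only:   {cpu_only:,.0f} ops per joule")
print(f"CPU + FPGA: {cpu_plus_fpga:,.0f} ops per joule")
print(f"Gain:       {cpu_plus_fpga / cpu_only:.1f}x performance-per-watt")
```

Even though the accelerator adds to the power bill, the throughput gain on the offloaded kernel dominates the ratio, and that ratio is the metric datacenter architects care about.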

Of course, Intel isn’t the only company pursuing FPGAs in datacenter computing. While their dominant market share in datacenter processors and their pending acquisition of Altera certainly give them an enviable position in the race, they are by no means the only viable force in the fight. Xilinx and Qualcomm recently announced a collaboration to deliver a heterogeneous computing solution to the datacenter that is similar in many ways to the Intel/Altera combination. In fact, the Xilinx/Qualcomm announcement fueled speculation that Xilinx might be an acquisition target for a company such as Qualcomm. But, regardless of whether it’s through collaboration or acquisition, it is clear that competing solutions are on a collision course as they woo the architects of the cloud-based computers of tomorrow.

That race has many dimensions. We have the basic semiconductor fabrication technology, with Intel going up against TSMC. We have the advanced multi-chip packaging technologies from both sides. We have Altera’s FPGA fabric architecture competing with that of Xilinx. We have Intel’s processor architecture squaring off against an insurgency from ARM. We have competing high-speed memory architectures, including Hybrid Memory Cube (HMC), High Bandwidth Memory (HBM), and others. We have a wide variety of mass storage technologies claiming supremacy. And we have Qualcomm’s server platforms up against a plethora of incumbents, each with their own strengths and differentiators.

If the new reality is like the present, there will be only one winner. Today, Intel dominates the server market. But the primary reason for that dominance is the strength, legacy, and backward compatibility of Intel’s x86 architecture. Zillions of lines of datacenter code have been optimized for those processors, and that fact alone has been enough of a moat to fend off even the most serious challengers.

But, with the next wave of computing, instruction set compatibility may not be the sticky superpower it once was. If acceleration with FPGAs is the magic bullet, the engineering effort to optimize an algorithm for FPGA-based acceleration will likely be dramatically greater than the effort required to, say, port software to a different instruction set. That optimization work, not the instruction set, becomes the switching cost. And, if the FPGA component is the new “stickiness,” the winning overall system may be the one with the winning FPGA. Finally, the one with the winning FPGA may be the one with the most robust set of software tools for getting applications to take advantage of the incredible power of heterogeneous computing with the least engineering effort and expertise.

So, it may all boil down to this: The distributed cloud-computing architecture of the future could be determined by which FPGA company has the best software tools.

That’s a high-stakes battle that would be interesting to watch.
