
Merging Lanes

Will FPGAs Re-converge?

For years, we had just “FPGAs.” We didn’t have today’s high-end, low-cost, value-based, platform-enabled, I/O-optimized, low-power, DSP-enhanced, SerDes-enriched flavors. The very nature of FPGAs was to be generic. They were, after all, reprogrammable devices that could tackle any task assigned to them (provided, at the time, that the task didn’t require more than about 30 MHz performance, more than a few thousand look-up tables, any memory, or even the merest modicum of power efficiency). In other words – glue logic.

From those humble beginnings, the floodgates opened. FPGAs got bigger and faster. This rocket ride on the Moore’s law missile left behind a trail of smaller and slower FPGAs – which, of course, had every bit as much utility as when they were originally designed. The new, bigger, faster FPGAs were primarily used in applications like telecommunications infrastructure – at a time when that industry had an almost insatiable appetite for density and performance combined with seemingly unlimited price and power budgets.

As the telecom boom went bust, however, FPGA companies went looking for greener pastures, or even more pastures that were the same color green, or even just pretty much any pastures at all where people might prefer programmable logic to ASIC implementations in lower-volume applications. Of course, this meant that the architects of programmable logic devices had to start listening to a much more diverse audience of customers than ever before. Instead of just blindly chasing the technology curve with density and performance, they had to begin making delicate tradeoffs among things like features, performance, cost, density, power, and pinout.

For different markets and applications, different things were important. In order to serve all those masters, FPGAs started packing on the pounds with additional features for just about every conceivable situation. Processor cores, memory, multipliers, complex clock management, a bevy of I/O standards – all led to big, costly, power-hungry Swiss-army-knife devices that could do almost everything but weren’t especially well suited to any particular task. Almost all at once, FPGA companies abandoned the one-size-fits-all strategy for broad portfolios of devices with varying mixes of capability.

This diversification first broke down along one very strong line – price. It was untenable to produce devices that were both cost- and performance-optimized. Our FPGA universe bifurcated, and new “value-based” or “cost-optimized” devices such as Xilinx’s Spartan series, Altera’s Cyclone series, Lattice Semiconductor’s ECP series, and Actel’s ProASIC series appeared on the scene. Though each of these lines had significant differentiators that set it apart from the others, the one thing they all had in common was careful attention to silicon area and manufacturing cost. These FPGAs were destined for non-traditional programmable logic applications with enough volume and unit-cost pressure that they could never have considered the old-guard FPGAs with their four-digit price tags.

Low-cost devices went on a serious diet, and their prices plunged to the single-digit dollar mark – not even sharing an order of magnitude in price with their performance-packing predecessors. The FPGA universe was once again clean. You needed either a low-cost device or a high-density/high-performance device. There was no middle ground or indecision.

At this point, a new problem emerged – competition. To win the traditional FPGA market game of one-upmanship, you always needed one or two things that your competitor’s family did not have. In high-end FPGAs, this had always been a fair fight. The first team to develop a new process node, to integrate a new I/O standard, to have the fastest DSP blocks, or to pack in the most memory was the winner. There was no tradeoff to be made.

Unfortunately, in the new split-level silicon market, the low-cost family was not such a clean player. Sure, it was easy enough to one-up the competition by sliding a feature from your high-end platform down to the low end, but every time you pulled that stunt, the gap between your two families shrank, and your risk of cannibalizing your high-end family with more capable low-end devices increased. Every time you dropped something into the low-cost devices that was previously the purview of high-end programmable logic, you devalued the high end in exchange for low-cost bragging rights. It was a slippery slope.

When there were just two primary players in the race – Xilinx and Altera – a bit of balance could be achieved. Both companies were careful to protect the legacy of their high-end families by not yielding too quickly to the temptation to out-press-release the other by sliding something important down into the low-cost line just for bragging rights. Unfortunately for them, the world is made up of more than two companies. Challengers like Lattice and Actel, with no high-end markets to protect, slid into the game and posted strong challenges to the two big companies’ low-cost positions. Lattice took the conventional low-cost FPGA and added things like full-featured DSP blocks (others had just multipliers), built-in boot flash, and, most recently, SerDes I/O for popular standards. Actel hammered away at the benefits of non-volatility in its devices – security, power consumption, single-chip integration, and live-at-power-up operation became battle cries that burned the big two in engagements where those capabilities were highly valued.

Soon, the big players were forced to move.  Xilinx augmented its low-cost offering with low-power modes, security features, and even a system-in-package-style stacked die that combined flash memory with a conventional SRAM FPGA to make a virtual non-volatile device.  Altera rallied with more robust DSP features and, most recently, SerDes I/O in a low-cost family.  It would be reasonable to assume that, over time, all of the challengers’ key differentiators will need to be addressed in some way by the big two.

While the low-cost families were beefing up, the high end saw interesting changes as well. Both Xilinx and Altera diversified their high-end families (Virtex and Stratix) along more than just the previous lines of density (and performance, if you count speed grades). They began to offer variants with and without SerDes transceivers, and with varying mixes of other features like memory, DSP blocks, and processor cores. They also began to stretch their high-end families so that the spread between the largest and smallest (and the most and least expensive) devices was greater than ever before.

With all this branching and convergence, the gap between low-cost and high-end FPGAs seems to be closing. The performance and price step between the two styles has become more of a continuum, and market demands for advanced features like SerDes, DSP blocks, advanced memory interfaces, and embedded processor capability in low-cost devices have all but erased those features as high/low differentiators.

What we see emerging today, instead of just two distinct classes of FPGAs, is a broad, fairly continuous, multi-dimensional spectrum of silicon with fine-grained steps in capability and cost. Eventually, this will find its way back into the branding as well. With the arrival of its newly announced Arria family, Altera has already scrapped the two-tier model for at least a three-tier one. The other companies are likely not far behind.

What this should mean for the FPGA-using community is more capability for less cost, finer control over tailoring the mixture of features to your problem, and better matching of programmable logic capabilities to particular problem domains rather than arbitrary clustering based on density and speed. Watch carefully over the coming months. It should be an interesting ride.
