
Proliferating Programmability in 2014

Forecasting the FPGA Future

The ball has dropped, the bubbly sipped, and the resolutions resolved. 2013 has ended, and before us we have a new year, a new universe of opportunity, and a crazy cauldron of activity in our beloved world of programmable logic. It’s time to throw down, gaze into the crystal ball, read the tea leaves, interpret the Tarot, and extrapolate the trend lines. Here, then, is our unflinching forecast for FPGAs in the months and years to come.

Before we fire up our forecast fest, we should nail down what we mean by “FPGA.” After all, the definition has been morphing, expanding, and shifting over the years, and even the companies with thousands of employees dedicated to nothing but making and selling FPGAs don’t seem to agree on the current meaning of the acronym. Ours will be simple – if it has a look-up-table (LUT) cell, it is an FPGA. (Yes, we hear the screams out there. Bear with us. It will all come out in the wash.) 

This definition includes a crazy range of semiconductor devices. Xilinx’s upcoming 20nm, TSMC-fabbed, interposer-based, 4.4-million-logic-cell UltraScale device? Yes, it’s absolutely an FPGA – with a likely four-or-five-digit price tag. How about Lattice Semiconductor’s almost-microscopic (1.4mm x 1.4mm BGA), 25-microwatt, 384-cell iCE40 device? Yes again – also an FPGA – with a volume price of under fifty cents.

Devices that used to be CPLDs have actually been FPGAs for a long time. Nothing new to see there – move along. Similarly, devices that are being marketed without the FPGA label are also FPGAs in our book. Sorry, Xilinx, that means we are gonna slap the F-label on your Zynq 7000 “All Programmable extensible processing platform SoC,” just as we will on Altera’s similar Arria and Cyclone SoC FPGAs. Likewise, QuickLogic’s CSSP (customer-specific standard product) devices will get F-ed by us – just because there are a bunch of LUTs in there doing the dirty work. Microsemi’s SmartFusion2 SoC FPGAs and Igloo2 FPGAs? Just like the names say. How about the new, radical interlopers such as Tabula’s “spacetime” ABAX 3PLD devices, or the Achronix “Speedster” HD1000 – a 22nm, FinFET-based, million-LUT part? Yup. FPGAs as well.

Of course, FPGAs can still do what FPGAs have always done best – hustle packets from point A to point B in the machines that power the global information superhighway. Certainly the biggest share of business for companies like Xilinx, Altera, Tabula, and Achronix still comes from that market segment. But today’s FPGAs can also scale displays on mass-market micro-power consumer devices, crank signal processing algorithms at speeds that dedicated DSP chips can only dream of, and give life to powerful technologies like software-defined radio. They can put the intelligence into embedded vision, increase the effectiveness of sophisticated radar systems, and bridge communications gaps between otherwise-incompatible standards on circuit boards. They can buddy up with microcontrollers and sensors to give situational awareness to devices in the Internet of Things, and they can bust out some incredible performance numbers when applied as application accelerators in supercomputing servers.

So, FPGAs can span something like five orders of magnitude in cell count, cost, and power. They can contain sophisticated high-performance processing subsystems, screaming-fast SerDes, DSP accelerators, volatile and non-volatile memory, and a truckload of special features. They come in a variety of novel architectures and configurations tailored to a huge range of application domains. Forget all the generalizations you’ve heard regarding size, performance, power consumption, or cost. FPGAs are just about everywhere and can do just about anything.

With such a broad definition and applicability, it would be easy for our concept of “FPGA” to lose its focus as we move into the future. Indeed, it arguably already has. More than anything, FPGAs are synonymous with programmability at every level – and not just in the “software” sense. While processors allow the flexibility of software programming, FPGAs allow that plus hardware, IO, and often analog programmability. That means they can go places and do things that simple processors cannot. But what things? With processor capability steadily improving, won’t we reach a day when everything can be done in software and FPGAs are obsolete?

That question brings us to the prognostication phase of our tale. Where will this behemoth technology turn next? Will FPGAs lose their identity altogether? Will processors get so good and so fast that everything can really be done in software, and we won’t need programmable hardware anymore?

For us, the key differentiating property of FPGAs is power-efficient parallel computation. FPGAs can run certain types of algorithms much faster than any processor ever built, while consuming significantly less power in the process. The biggest issue with server farms and data centers these days is getting the power in and the heat out, and power is becoming the most critical design constraint on everything from mobile phones to network switches. Sooner or later, then, computational power efficiency will be the single most important design metric. In that world, FPGA architecture reigns supreme.
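
To make that concrete, here is a minimal sketch of the kind of kernel we mean – a simple FIR filter, with the tap count and data entirely our own invention. A CPU grinds through the multiply-accumulate loop one iteration at a time; FPGA fabric can instantiate all of the multipliers side by side and deliver one output sample per clock at far lower power.

```c
#include <stdint.h>
#include <stdio.h>

#define TAPS 16  /* illustrative tap count */

/* One output sample of a TAPS-tap FIR filter. On a CPU, this loop
 * runs serially; in FPGA fabric, all TAPS multiply-accumulates can
 * happen in parallel, every clock cycle. */
static int32_t fir_sample(const int16_t coeff[TAPS], const int16_t window[TAPS])
{
    int32_t acc = 0;
    for (int i = 0; i < TAPS; i++)
        acc += (int32_t)coeff[i] * window[i];
    return acc;
}

int main(void)
{
    int16_t coeff[TAPS], window[TAPS];
    for (int i = 0; i < TAPS; i++) {  /* arbitrary demo data */
        coeff[i]  = (int16_t)(i + 1);
        window[i] = (int16_t)(TAPS - i);
    }
    printf("y = %ld\n", (long)fir_sample(coeff, window));
    return 0;
}
```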

But there are significant barriers to FPGAs becoming the ubiquitous heterogeneous compute engines of the future. For at least two decades, EDA companies have talked at length about the challenge of “hardware/software partitioning” – which boils down to deciding which functions of a particular application are best done in hardware and which in software. Where an algorithm is broadly complex, software is the key to capturing that complexity within a reasonable amount of silicon area. Where nested loops and other constructs put computation speed at a premium, optimized hardware implementation pays huge dividends. The problem, however, is figuring out which is which, splitting up the vast middle ground, and then – the ultimate challenge – finding a clean, automated means of putting those hardware bits into actual hardware.
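
To illustrate the partitioning decision itself, consider a sketch in which the function names and sizes are ours, purely for illustration: the branchy, stateful parser below is a poor fit for fabric and belongs in software, while the regular, loop-dominated matrix multiply beside it is the classic hardware-offload candidate.

```c
#include <stdint.h>

#define N 64  /* illustrative matrix size */

/* Software side: irregular control flow, little arithmetic.
 * (A hypothetical packet-header parser.) */
int parse_header(const uint8_t *pkt, int len)
{
    if (len < 3)
        return -1;                             /* runt packet */
    switch (pkt[0]) {
    case 0x01: return pkt[1];                  /* short form  */
    case 0x02: return (pkt[1] << 8) | pkt[2];  /* long form   */
    default:   return -1;                      /* unknown     */
    }
}

/* Hardware candidate: dense, regular, loop-dominated arithmetic. */
void matmul(const int16_t a[N][N], const int16_t b[N][N], int32_t c[N][N])
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            int32_t acc = 0;
            for (int k = 0; k < N; k++)
                acc += (int32_t)a[i][k] * b[k][j];
            c[i][j] = acc;
        }
}
```

Real applications, of course, rarely split this cleanly – which is precisely why the vast middle ground is so hard.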

The modern stable of FPGAs that combine conventional processors with FPGA fabric, memory, and IO begins to approach an ideal heterogeneous computing platform from the hardware side. These devices have everything you’d need in an ideal world – fast, multi-core processors to get the most punch from the software part, generous helpings of LUT fabric to create hardware accelerators and peripherals for the hardware part, and (perhaps most often overlooked) copious amounts of high-bandwidth memory right on the chip. If you started with a clean whiteboard and wanted to design the ultimate power-efficient heterogeneous processor, you’d make something that looks like a current SoC FPGA.
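
As a hedged illustration of how the two halves meet, here is how software on the SoC’s processor might kick off an accelerator living in the fabric – through a memory-mapped register block. Everything here (the base address, the register layout, the run_accelerator name) is hypothetical; the real map comes from your own hardware design and the device’s documentation.

```c
#include <stdint.h>

/* Hypothetical register block for a fabric-based accelerator,
 * memory-mapped into the processor's address space. */
#define ACCEL_BASE 0x43C00000UL  /* assumed bus address -- not from any datasheet */

typedef struct {
    volatile uint32_t ctrl;    /* bit 0: start                    */
    volatile uint32_t status;  /* bit 0: done                     */
    volatile uint32_t src;     /* physical address of input data  */
    volatile uint32_t dst;     /* physical address of output data */
} accel_regs;

/* Point the accelerator at its buffers, start it, and wait. */
static void run_accelerator(uint32_t src, uint32_t dst)
{
    accel_regs *r = (accel_regs *)ACCEL_BASE;
    r->src  = src;
    r->dst  = dst;
    r->ctrl = 1u;                     /* kick off the hardware  */
    while ((r->status & 1u) == 0u)    /* spin until fabric done */
        ;
}
```

In practice the traffic flows over an on-chip bus, and a production driver would use interrupts rather than polling – but the point stands: the processor sees the hardware half as just another peripheral.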

The problem is, the software tools are still far from mature. Even though EDA has been working on hardware/software partitioning, high-level synthesis, mixed-mode simulation, and other technologies required to make one of these chips as user-friendly as a conventional SoC, we are nowhere near that milestone yet. There are respectable efforts in high-level synthesis, compilation of languages like OpenCL, and semi-automated assembly of DSP datapaths from tools like MATLAB and Simulink, but we have not yet reached the point where a fresh-out-of-school software developer can “write code” for an FPGA SoC, compile it, and have an optimized application whirring away.
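
For a sense of what those high-level synthesis flows ask of the developer today, here is a sketch: ordinary C plus a tool-specific directive. The pragma below follows Vivado HLS-style syntax purely as an illustration – each vendor’s flow has its own annotations – and vector_scale and N are our own hypothetical names.

```c
#define N 1024  /* illustrative vector length */

/* C-for-synthesis sketch: the loop body is plain C; the pragma asks
 * the synthesis tool to pipeline the loop so it accepts a new
 * iteration every clock cycle (II = initiation interval). */
void vector_scale(const int in[N], int out[N], int gain)
{
    for (int i = 0; i < N; i++) {
#pragma HLS PIPELINE II=1
        out[i] = in[i] * gain;
    }
}
```

The catch, of course, is that writing C that synthesizes well still requires knowing what the hardware underneath is doing – which is exactly the expertise gap the tools have yet to close.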

The battle for the future, therefore, will be fought not in hardware but in software – not in semiconductor technology but in tools and IP. The long-term future of FPGAs will belong to those who can come up with a tool flow that makes these amazing devices usable by the masses.

Of course, hardware will still matter. In the short term, there is a battle raging among Xilinx, Altera, Tabula, and Achronix to bring the latest, fastest FinFET-based conventional FPGAs into the waiting arms of the communications sector. Ironically, Tabula and Achronix got there first, but without the clout and experience of their much larger competitors. Xilinx and Altera are each working feverishly on their own FinFET offerings, and it isn’t at all clear who will get there first – or whether that will even be a factor in the long-term adoption of the devices.

While that contest continues, many other battles in the FPGA space run virtually uncontested. Lattice, Microsemi, and QuickLogic are all happily cranking out interesting FPGAs that address important segments of the market – mostly without significant competition. Each of them has analyzed specific application areas and created devices optimized for those applications – which gives them a compelling advantage going after the sockets they seek. If one of their salespeople knocks on your door, you will probably want their chips, because they wouldn’t waste time on the call if they didn’t already know the answer. The question for each of them is how well they selected their target market and application. If one of those markets explodes as part of the “next big thing,” its supplier could quietly rise to prominence.

So, while the big companies continue to make noise about who is first on the next node, the FPGA market will grow for other reasons. As long as the world’s appetite for bandwidth continues to grow, the traditional FPGA stronghold of communications infrastructure will remain a linchpin of the industry. But with new and exciting applications for programmable logic – and particularly SoC FPGAs – popping up all the time, the day may be near when that segment no longer defines and controls the direction of the FPGA market.

The long-term future of FPGAs will depend on how well the big companies do with engineering their tools. The EDA industry seems to have mostly abandoned the FPGA implementation flow, but those companies could find themselves back in the middle of it if their latest high-level design and verification methodologies begin to find a home in FPGA design flows. For FPGAs to achieve widespread adoption as heterogeneous computing platforms, however, those technologies will have to mature to the point that they are no longer thought of as “design tools,” but rather as “compilers” – low-cost end-user tools that assist in programming the machine rather than designing and implementing it.

2014 will likely see only the first baby steps of these trends. Specifically, watch the adoption of Xilinx’s Zynq and Altera’s SoC FPGA platforms. See if Tabula and Achronix wrest any of the core communications market from the grip of the big two. Watch for Lattice devices to ship in enormous volumes in mobile devices like smartphones and tablets. See Microsemi’s devices deployed in numerous security-conscious, lower-volume applications. Watch for QuickLogic to hit a home run in ultra-low-power sensor fusion. Finally, brace for a war of words over design tools, with Xilinx touting its all-new, cutting-edge Vivado suite and Altera reminding us that it has had a leg up in tools for years with its proven Quartus II system.

It will be a fun year in FPGA land!
