
Programmable Pile of Parts

Big FPGAs Go Off the Rails

FPGAs were conceived as “do anything” chips – Jacks of all Trades. Sure, they were crazy expensive for the number of effective gates they offered, they were a lot slower than custom logic doing the same task, and they drank copious quantities of coulombs getting the job done, but they could be programmed to do exactly your bidding. In a lot of designs, that made the FPGA the no-longer-missing link – the glue that connected the other components together. Need two incompatible interfaces bridged? Throw in an FPGA. Need an IO that wasn’t provided by your SoC? FPGA saves the day. Want to extend the life or usefulness of your ASIC? Park an FPGA next to it.

At this point, we suggest for your consideration the venerable Swiss Army Knife (SAK) analogy. Yep, we know we’ve used it before, but this is version 2.0. There are undoubtedly marketers behind the Swiss Army Knife, and they must periodically decide what gets included in the next-generation version. Yes, the SAK can perform the job of a variety of tools – knife (of course), screwdrivers (both Phillips and slot), saw, toothpick…. As you get to the higher-end versions, additional accessories come along for the ride – bottle opener, can opener, fork, spoon, an awl – but the increased capabilities come at a cost. The SAK is heavier, bulkier, and more expensive, and it’s harder to find the particular thing you need at any given time.

Because the LUT-based programmable fabric of FPGAs is expensive, slow, and inefficient, FPGA companies cleverly began to put blocks of hardened logic in for common functions. Arrays of hardened multiply-accumulate blocks appeared to help out with DSP-like tasks. Block RAMs sprouted up. Hardened interfaces for PCIe, Ethernet, DRAM, and more lined the periphery. Processors and whole processing subsystems materialized. FPGAs became full-blown systems-on-chip that also happened to have some do-whatever-you-want programmable fabric thrown in – veritable Swiss Army Knives of digital logic. Oh, and maybe even a little analog for good measure.

There is also a downside to this game, however. The more things you harden, the more chip area you take up with things that may or may not be useful for any particular application. Some folks don’t need any of the DSP blocks, for example, and others want as many as you can provide. As the FPGA wars heated up, the FPGA companies realized they couldn’t continue with the one-device-fits-all mentality. They began to specialize devices for various classes of applications. Some maximized DSP resources, some provided more high-speed serial I/O, and some came with larger amounts of RAM. Getting the proportions of hardened logic right was a marketing balancing act. Teams of marketers analyzed the needs of various market segments and tailored classes of FPGAs just for their needs. FPGAs became less general purpose and more domain-specific.

As Moore’s Law has pushed us toward higher and higher levels of integration, the temptation to throw more stuff onto FPGAs was just too strong for the manufacturers to resist. And, as the cost of mask sets rose generation after generation, making more variations became steadily less practical. Rather than building a chip for market A with one set of resources and a chip for market B with a slightly different mix, why not just throw enough of everything on a single chip to fit both markets? Data sheets grew steadily more impressive. Just like the high-end SAK, the longer the list, the better, right?

High-end chips became so capable that manufacturers grew reluctant to continue calling them FPGAs. After all, if you make a chip with multiple types of processors (multi-core applications processors, real-time processors, graphics processors), all the associated peripherals, built-in DSP resources, copious amounts of on-chip memory, high-speed interfaces for off-chip memory, SerDes multi-gigabit IO, and – oh yes – some FPGA LUT fabric, why call the thing an FPGA? It’s really a system-on-chip that happens to also include some programmable logic.

This never happened with the Swiss Army Knife, for some reason. Even though a vanishingly small percentage of the device is still a “knife,” they never reached a point where they said, “OK, the fork thing did it! Clearly, if it’s got a fork, we shouldn’t keep calling it a knife. Let’s have the world’s first Radically Multi-Function Hand Tool (RMFHT).” Probably, they tried but got into too many arguments about the appropriate way to pronounce the acronym RMFHT.

Luckily, there is no Moore’s Law for hand tools. If there were, we would probably have SAKs today with radial arm saws, drill presses, convection ovens, oscilloscopes, pneumatic hammers, 3D printers…. They’d still fit in a pocket, not be very good at anything, and they’d set your well-heeled Boy Scout back about $20K.

Here’s the problem. As you may have noticed, the Swiss Army Knife can do a very wide variety of tasks, but it is never the best choice for anything. Nobody sets their fine dining table with the SAK fork as the go-to implement. If a chef wants a good knife for preparing a meal, you’ll never see him or her reach for their trusty SAK. Go to any mechanic’s workshop and you won’t see an SAK in hand instead of a screwdriver. In fact, you probably won’t see one in the shop at all. And the toothpick? We don’t even need to get started on that.

The next generation of FPGAs (and, as we said, manufacturers are even trying to stop calling them FPGAs) are some of the most complex chips ever attempted. And, as a friend of mine says, “complexity is the enemy of… everything.” By leveraging single-digit-nanometer manufacturing technology to create the latest “Jack of all Trades” device, we may have finally arrived at the “master of none” situation. The data sheets for these devices literally look like a parts catalog. In fact, perhaps we should think of them as “Catalog on a Chip” (CoC) devices. For any given application, it seems unlikely that a large percentage of the available resources and capabilities will be used. But the obvious step of “use a smaller device” will be blocked by some particular need of the application – you’ll need the one with enough processors, or the one with enough IO, or the one with enough DSP resources. That means that, even though an FPGA can meet the needs of many systems, it is far less likely to be the best choice for any particular one.

Smaller FPGA companies actually recognized this years ago. In order to compete and coexist with the two dominant FPGA suppliers, they had to focus and specialize their offerings for specific markets. By tailoring their FPGAs, IP offerings, tools, and service to targeted markets, they could beat the big guys in a particular niche. By fortifying their positions in those areas, they could steal enough business from the big two to make a living. Meanwhile, the big two players focused on outdoing each other with more capable high-end devices.

Now, the big players have also found the need to focus on specific opportunities. Data center acceleration, for example, has proven far too attractive to leave to chance – or to the other guys. As both Intel and Xilinx have turned their resources to square off against each other for that battle, their FPGA offerings have begun to bend. Will that new built-in bias affect the usefulness of their parts for other application areas? Will smaller suppliers capitalize on the distraction to steal away market share in other lucrative markets?

Further, with the sudden acceleration of embedded FPGA IP (eFPGA), large companies that want some programmable logic on their custom chip now have attractive alternatives that don’t involve paying the kinds of margins that the traditional FPGA companies have always enjoyed. Instead of buying (or building) an SoC and parking an FPGA next to it, or buying an SoC with FPGA fabric on it from the FPGA companies, you can design your own SoC with exactly the amount and type of FPGA fabric you need. The FPGA suppliers lose their lock on hardware programmability.

All of this points to the possibility that the FPGA market itself may simply cease to exist in a few years. As programmable fabric becomes just another type of block on ever-more-integrated devices, the novelty will fade, and tracking programmable logic as its own entity might simply become uninteresting. Certainly nobody talks about being #1 in the UART market or the multiplier market. LUT fabric might not be far behind. It will be interesting to watch.
