
(Moore’s) Law of Diminishing Return

When Do We Have Enough?

Following the semiconductor industry for the past few decades, we’ve seen something unprecedented in human history. There has been a sustained exponential growth that has survived for over four decades, with resulting numbers that are absolutely mind-boggling. Analysts and writers have struggled to find the appropriate metaphors: “If the auto industry had done this, all cars would now travel faster than the speed of light and get over a million miles per gallon.” The attempts all seem to fall short of giving the audience a grasp of the magnitude of this accomplishment. 

In the FPGA industry (now about three decades old), progress has outpaced even Moore’s Law. FPGAs started out toward the end of each process node: the new upstart companies were not at the front of the line at the merchant fabs, so they got the latest technology later than the leaders. As time went by, the FPGA companies migrated to the front of the line. In addition, FPGAs have made architectural gains that have helped them outpace Moore’s Law in a number of ways. As a result, the programmable logic devices of today bear scant resemblance to what we had a few years ago. With the almost incomprehensible increase in density, a “good enough” improvement in speed, and remarkable transformations in power consumption, today’s FPGAs fill a completely different set of design and market needs from those of the past.

FPGA companies have had to adapt to this change, and it hasn’t been easy. Field applications engineers (FAEs) – always the “secret weapon” of the FPGA industry – used to be universal. They could show up at your location with the design tools on their laptops, tweak a few configuration options, swap around a few lines of VHDL, and get your design humming right along in an hour or two. Nothing you would be doing with their FPGA would surprise them. They could handle it all.

Today, the FAE has to be more specialized. FPGA users may have problems that require a deep knowledge of specific areas ranging from signal integrity and board design with multi-gigabit transceiver links, to DSP algorithms being implemented with hardware acceleration in FPGA fabric, to embedded operating systems running applications on multi-core embedded processing subsystems inside the FPGA. Any one of those topics could be a career-long study for a true expert. FPGA companies have had to divide and conquer – training teams of FAEs in different specialties.

Also, as this evolution has progressed, the FPGA has moved from being a tiny part of most systems – “glue logic” thrown down at the last minute to bridge incompatible standards on a board – to a “programmable system on chip” where most of a system’s capabilities are integrated into the FPGA. Now, the biggest reason to keep anything OFF the FPGA is a requirement for some special process. Analog, memory, and other “specialized” chips are among the last holdouts that can’t be folded into your FPGA design.

With each of the past four or five generations of FPGAs, the industry has declared victory. “This time,” they say, “FPGAs are TRUE ASIC replacements.” Each time, it’s at least partially true. With each new process node, ASIC, COT, and custom chip designs in general become exponentially more expensive, and fewer and fewer companies have the resources and/or the need to design a custom chip. As applications fall off the ASIC truck, they generally land softly in the FPGA net. They have to make some compromises, of course. Unit costs are much higher but are offset by dramatically lower NRE and design costs and risks. Power consumption is far worse than full custom chips, but usually “good enough” – and the gap is closing with each new generation. Performance is nothing like full custom, but it is also “good enough,” and the lack of ultra-high clock frequencies can be offset by clever use of parallelization.
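The claim that parallelism can offset a clock-speed deficit is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below uses hypothetical numbers (the 1 GHz and 250 MHz figures are illustrative, not from the article) to show how aggregate throughput, rather than raw clock rate, is the quantity that has to be “good enough”:

```python
def throughput(parallel_units, clock_hz, results_per_cycle=1):
    """Aggregate results per second for a set of identical pipelines.

    Assumes each pipeline produces `results_per_cycle` results every
    clock cycle and that the units run independently (no shared bottleneck).
    """
    return parallel_units * clock_hz * results_per_cycle

# Hypothetical full-custom chip: a single pipeline clocked at 1 GHz.
asic = throughput(parallel_units=1, clock_hz=1_000_000_000)

# Hypothetical FPGA implementation: four parallel copies of the same
# pipeline, each clocked at only 250 MHz.
fpga = throughput(parallel_units=4, clock_hz=250_000_000)

# The 4x slower clock is fully offset by 4x parallelism.
print(asic == fpga)
```

The caveat, of course, is that this only works for workloads that parallelize cleanly; serial dependencies or shared-memory bottlenecks erode the offset.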

However, this repeated claim of “This time, FPGAs have arrived” has started to have the feeling of crying wolf. From the early days, when FPGA vendors boasted millions of “system gates” only to have the reality shown to be mere thousands of “ASIC-gate equivalents,” the FPGA companies have tarnished their own credibility with extravagant claims. The thing is, now that those years-old claims are actually coming true, will anyone believe them? The latest 28nm and soon-to-be 22nm FPGAs (with smaller geometries already in the works) have a remarkable amount of capability. They can certainly keep pace with custom chips that are only a process node or so behind them, and for many of their functions (such as high-speed serial connectivity) they are at the forefront of capability.

FPGAs, by most any measure, have arrived. With densities now reaching 2 million look-up tables, they can replace custom devices in all but the most demanding applications, and they can bring that capability to market years before comparable ASSPs can follow with standardized, mass-market chips. With each passing process node, the “FPGA penalty” grows smaller. Unit prices decrease, power disadvantages diminish, and functional and performance capabilities pass the “good enough” line for a larger and larger subset of potential applications. Now, with heterogeneous processes being possible within single FPGA packages, even process-incompatible functions like analog and non-volatile memories can potentially be included in FPGAs.

This brings up the next logical question in the evolution of FPGAs: When do we hit the point of diminishing marginal return? Already, we are seeing a narrowing of the list of applications that require the biggest-baddest FPGAs that the vendors can produce. Ironically, one of the “killer apps” for the biggest FPGAs is prototyping custom chips. If we hit the point where custom chips on the latest process nodes are out of reach for everyone but a tiny set of elite companies, will the need for the largest FPGAs disappear as well? If so, that leaves us with the rest of the FPGA family lines to battle it out for the market. 

In this scenario, no longer will “world’s largest” or “world’s fastest” be worth much, except as bragging rights. The vast majority of designers will be selecting their FPGAs from the middle of the range, and the company that provides the best fit of capabilities at the right price for any particular application will win the socket. Emphasis will shift from “bigger, faster” to “cheaper, more efficient”. At some point, when the BOM contribution and/or the power consumption of the FPGA becomes irrelevant in the big picture of system design, FPGAs could truly enter the realm of commodity – much as DRAM devices are today.

It also starts to feel like crying wolf to keep claiming that FPGAs are at a crossroads or a turning point. However, over the lifespan of this interesting technology, it has often been true. Perhaps that is the inherent nature of the sustained exponential: no matter how amazed you are at what you’ve already seen – you ain’t seen nothin’ yet.

2 thoughts on “(Moore’s) Law of Diminishing Return”

  1. There has been a lot of debate over the years about when Moore’s Law will end. However, another interesting question might be: When will we stop caring?

  2. As Moore’s Law gives us more and more logic at every node, the question is not only “When will FPGAs replace ASICs for the most demanding applications?” but also “When will single-chip solutions eliminate the need for multi-board, multi-chip systems?” In the networking industry we are still very far from that point. You still see many switches/routers implemented as complicated chassis with many line cards carrying multiple chips. As time goes on, it becomes possible to build more capable “single-chip” (putting aside memories, PHY devices, etc.) switches/routers. However, in an ASIC this might not be economical. FPGAs seem to be a good way to proceed, replacing expensive and complicated chassis-based solutions with a more compact system built from an FPGA and some peripherals. Toward that end, a significant improvement in frequency and in the number of LUTs is still desired.
