
Chasing Rainbows

The Myth of ASIC Replacement

With the predictability of a sunrise, the Moore’s Law heartbeat has throbbed its way into the collective consciousness of electronic designers. Every two years or so, the industry visits upon itself a new semiconductor process node, and the implications of that change ripple across the surface of the already-turbulent waters of the industry. Each time, we are amazed anew. Each time, we have to rewrite our understanding. Each time, we are emboldened to go out into the world and announce, “Now, we have finally arrived! FPGAs can replace ASICs once and for all!” Then, we see our shadow and go back underground for two more years of winter.

FPGA marketers have been chasing the myth of ASIC replacement for a couple of decades now. Like a mirage, the goal appears on the horizon – just out of reach. With each new generation of FPGAs, we think, “This must be the one. Our density is doubled. Our performance has increased again. Our power consumption has improved. Our cost has dropped. Our tools are more robust than ever. Now is the time that we will finally conquer the elusive ASIC and banish the term once and for all to the planet of the 8-track tapes.”

There are a number of problems with this line of thinking. First, ASICs get new process nodes too. Every time FPGAs get a boost in speed, capacity, and power efficiency, the latest semi-custom chips get the same extra kick. Of course, for ASIC folks, each new process node also brings a new penalty – higher non-recurring engineering (NRE) costs. 

The second problem is that we don’t really have a clear definition of “ASIC” anymore. In fact, “ASIC” was a bit of a misnomer to begin with – going all the way back to the early 1980s with the first gate arrays, our taxonomy has always been a little flawed. ASIC, of course, stands for “Application-Specific Integrated Circuit.” The best way to define “application specific” is to look at the kinds of chips that are NOT application specific. Clearly, memories, most processors, power supplies, discrete peripherals and, yes, even FPGAs are “standard parts” that can be used across a wide variety of applications. By contrast, chips like some of those made by Qualcomm are clearly “application specific” – they serve exactly one application: the cell phone. Of course, chips like those have taken on the name Application-Specific Standard Part (ASSP). That name only confuses things further – “application specific” and “standard part” used to be opposites.

The most descriptive name we have used is probably “semi-custom” – meaning parts that are fabricated mostly in an application-independent way and then customized for a specific application by a small number of final fabrication steps. Early gate arrays and later “structured ASIC” devices fit this description. However, today’s options blur even those lines.

At the recent Design Automation Conference, I moderated a panel entitled “Will Your Next ASIC Ever be an FPGA?” The panel was populated with industry experts from Xilinx, Altera, IBM, Huawei Technologies, Juniper Networks, and Advantest America. These luminaries represented the current ASIC and FPGA manufacturers as well as typical customers of both. The discussion was interesting, but I couldn’t shake the nagging feeling that we were answering the wrong question. The FPGA vendors pointed to recent advances in 28nm devices, of course, showing that the capacity, performance, and power efficiency were better than ever, and making the case that these devices could do anything an ASIC could do. The ASIC camp pointed out that they still hold an order of magnitude advantage in density and power efficiency, and sometimes in performance as well. The FPGA-ers countered that ASIC NREs were astronomical – tens of millions of dollars in total development cost per chip. The ASIC-ers replied that the FPGAs that were being touted as ASIC beaters cost thousands of dollars per part.
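
To see why production volume dominates that exchange, it helps to put rough numbers on it. The little sketch below uses purely illustrative figures – a hypothetical $20M NRE, a $50 ASIC unit cost, and a $2,000 FPGA unit cost – none of which came from the panel:

    #include <stdio.h>

    int main(void)
    {
        /* All figures are illustrative assumptions, not quotes from the panel. */
        double nre       = 20e6;    /* assumed ASIC development cost ($)     */
        double asic_unit = 50.0;    /* assumed ASIC production cost per part */
        double fpga_unit = 2000.0;  /* assumed high-end FPGA price per part  */

        /* The ASIC wins once its NRE is amortized by the per-unit savings. */
        double breakeven = nre / (fpga_unit - asic_unit);
        printf("break-even volume: about %.0f units\n", breakeven);
        return 0;
    }

Under those assumptions, the ASIC pays for itself after only about ten thousand units – which is why the very highest-volume sockets have rarely been in play for FPGAs, and why everything below that volume keeps slipping away from ASICs.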

None of these arguments are new. In fact, nothing has changed in the nature of these arguments for the last ten years. Each side has added zeroes to its datasheets, but nothing has substantially altered the competitive landscape between the two seemingly alternative ways of getting a chip that does what you want. But are these really the only options?

To me, it seems that there is one very large decision at the beginning of any new product development process. Will we have to fabricate custom silicon for this design, or can we build what we need using off-the-shelf parts of some type? Custom silicon – whether you call it ASIC or COT or some other name – has enormous barriers to entry. You need tens of millions of dollars of budget, a large and very sophisticated engineering team, ultra-complex tools, a mature development methodology, a high tolerance for risk, and at least two to three years of available time for the product design process. Most companies do not have these things. The few that do (and I’d argue that there are fewer than 100 such companies worldwide) are able to take advantage of those capabilities to create products that cannot be made by any other means – including FPGAs.

If a product can be developed with off-the-shelf parts, it probably should be. There is little in the end that can justify the extraordinary cost and complexity of a custom chip development process if the same product can be built with off-the-shelf components. The interesting products for custom chip development are those that cannot be made any other way. If you require performance, power efficiency, levels of integration, or form factors that can’t be realized with off-the-shelf components, then you can use custom silicon to build a highly differentiated, high-margin product that will be difficult for competitors to replicate.

More often today, however, products are differentiated in software rather than hardware. Increasingly, our chips themselves are all starting to look alike. The term “SoC” can be applied to almost every chip rolling out of the fab. Just about every “core” device we see these days has one or more processors, some memory, a handful of on-chip peripherals and accelerators, and a bunch of I/O that conforms to various industry standards. Today’s state-of-the-art FPGAs tend to “harden” most of those functions, which puts them on a level playing field with ASIC designs for those capabilities. In fact, if you look at devices like Xilinx’s Zynq or Altera’s upcoming “SoC FPGA,” a huge portion of each is identical to what you’d find in just about any typical ASIC. The only difference is that the FPGA company throws in a bunch of LUT fabric so that additional hardware capabilities can be added and customized by the end user.
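
To make that concrete, here is a minimal sketch of what end-user hardware customization looks like from the software side of such a device: the hard processor treats a user-designed block in the LUT fabric as just another memory-mapped peripheral. The base address, register layout, and the accelerator itself are hypothetical placeholders, not any vendor’s actual interface:

    /*
     * Hypothetical example: the hard processor on an SoC FPGA driving a
     * user-designed accelerator in the LUT fabric through memory-mapped
     * registers. Address and register map are made up for illustration.
     */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define ACCEL_BASE 0x43C00000UL  /* hypothetical fabric-block address  */
    #define REG_CTRL   0             /* word offsets into the register map */
    #define REG_INPUT  1
    #define REG_RESULT 2
    #define CTRL_START 0x1u
    #define CTRL_DONE  0x2u

    int main(void)
    {
        int fd = open("/dev/mem", O_RDWR | O_SYNC);  /* requires root */
        if (fd < 0) { perror("open /dev/mem"); return 1; }

        /* Map one page of the accelerator's register space into user space. */
        volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                       MAP_SHARED, fd, ACCEL_BASE);
        if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        regs[REG_INPUT] = 42;          /* hand the fabric block its operand */
        regs[REG_CTRL]  = CTRL_START;  /* kick off the hardware operation   */

        while (!(regs[REG_CTRL] & CTRL_DONE))
            ;                          /* spin until the fabric signals done */

        printf("result from fabric: %u\n", (unsigned)regs[REG_RESULT]);

        munmap((void *)regs, 4096);
        close(fd);
        return 0;
    }

The point is that half the chip runs ordinary software while the other half behaves like one more peripheral – except that the end user, not the silicon vendor, decided what that peripheral does.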

Perhaps a more appropriate title for our DAC panel would have been “Will Your Next SoC Have FPGA Fabric?” We did, in fact, discuss this issue. The ASIC camp pointed out that they had offered FPGA fabric as an option for years, and that few customers had actually designed it in. Further, they explained, the customers that had designed in FPGA fabric had ended up not using it and had designed it out in subsequent revisions. Everyone on the panel agreed that programmability is required to make an SoC useful and adaptable in the field. The magic question was whether hardware programmability is required or whether software programmability gives enough flexibility to handle all the changes we throw at a typical product.

However, as we have discussed many times, the usefulness of FPGA fabric is predicated on the availability of a robust set of tools and IP. That is something the FPGA vendors have spent monumental amounts of time and money to create. Just putting FPGA fabric on an ASIC does not give you the same capability that an SoC with FPGA fabric will provide if that whole platform is delivered by an FPGA company. The viability and usefulness of FPGA fabric on SoCs in that environment is still relatively untested in the market, but FPGA companies are betting big that it will be important.

It is interesting that FPGA vendors continue to chase the “ASIC replacement” concept. In reality, only a tiny fraction of today’s product designs use ASICs. However, those do tend to be the designs with the highest production volumes. It is more interesting to see how often FPGAs can replace off-the-shelf SoCs or how often they can avert the need to create ASSPs. In some cases (video display drivers, for example), FPGAs were so useful and at such a compelling price point that one could argue that a whole generation of ASSPs and ASICs was avoided. TV sets and computer monitors have shipped in volume with FPGAs on board.

In reality, FPGAs have been replacing ASICs all along. One by one, applications have fallen out of the range where an ASIC is required – or even feasible, given the staggering development cost and complexity. For those applications, FPGAs, off-the-shelf SoCs, and ASSPs have picked up the slack and won the sockets. In the long run, FPGAs should be focusing their competitive energies against standard SoCs rather than ASICs. In more and more applications, an off-the-shelf SoC will be able to do the job, and design teams will design it in. The question will then become, “Do I need hardware programmability, and am I willing to pay a premium to add that capability to my SoC?” The answer to that question may define the future of the FPGA industry.

