
Kicking a Dead Horse

FPGAs Going the Distance Against ASIC

Imagine seeing the following copy in a modern ad: “The new BMW 5-series sedan outperforms the horse and buggy in every important way. Your family will travel farther in a day and arrive less fatigued thanks to our superior cruising speed, climate-controlled cabin, and luxurious upholstery. It’s so much easier to use as well – no more hitching up the team before you start, and no more watering, feeding, and grooming at the end of the day. You just turn the key and drive away. Simple as that. So, before you snap up that new stallion you’ve been eyeing – consider a car instead.”

You’d probably feel like our Bavarian auto-marketers were out of touch with the times.  Certainly, there was an era when the auto industry’s main mission was replacing horse-drawn conveyances, but the automobile won that battle long ago, and marketers shifted their sights to more serious competition.

As of last week, FPGAs are still fighting full-tilt to steal market share from ASICs.  The problem is, there is almost nothing left to steal.  Sure, FPGAs have a bright high-growth future ahead of them, but it won’t come from luring away ASIC starts.  That battle has been long since won.

Let’s look at the trends.  First, ASIC design starts have been in a steady (and accelerating) decline for years.  This year, some forecasters estimate they will fall by over 20%.  There are a number of reasons for this decline, and most of them have nothing to do with FPGAs.  First, and most often cited, is the cost of developing a current-generation ASIC.  A conservative estimate for a 65nm ASIC project is $10 million USD.  If your only reason for building an ASIC is reduced unit cost, you’d better have a pretty big production volume planned to amortize that kind of expense.  If you’re building a million units, that’s $10 per chip in development costs – even if your silicon were free.  This creates a catch-22 of sorts.  Most of the systems that could support a seven-digit (or larger) production volume are consumer products.  On the other hand, most consumer devices are so cost-constrained that $10 of development cost on a single chip is too much BOM impact.  This isn’t an ASIC-or-FPGA story.  This is an ASIC-or-nothing story.  If a design requires an ASIC for performance or power reasons but can’t support that kind of volume, the project simply becomes economically infeasible and never starts.
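The amortization arithmetic above is worth making concrete.  A quick back-of-the-envelope sketch (using the article’s illustrative $10M NRE figure, not data from any real project):

```python
# Back-of-the-envelope NRE amortization: a fixed development (NRE) cost
# spread across every unit shipped. The $10M figure is the article's
# conservative estimate for a 65nm ASIC project.
def nre_per_unit(nre_dollars: float, units: int) -> float:
    """Development cost amortized over each unit shipped."""
    return nre_dollars / units

NRE = 10_000_000  # illustrative 65nm ASIC development cost

for units in (100_000, 1_000_000, 10_000_000):
    print(f"{units:>10,} units -> ${nre_per_unit(NRE, units):,.2f}/chip in NRE")
```

At a million units, that is the article’s $10-per-chip figure before a single wafer is bought – which is exactly why only very-high-volume (or very-high-value) products can justify an ASIC start.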

Second is the trend of integration.  During the golden age of ASIC – the 1980s and 1990s – we integrated like mad.  Our systems dwindled from four or five ASICs to two, and then to one; then we pulled most of the rest of the BOM onto the chip as well.  Our original goal was the “system-on-chip,” and by the end of the 1990s, we had ostensibly reached that goal.  For a static number of systems, that alone means the number of ASICs required dropped by a factor of three or four.  The integration didn’t stop there, however.  Convergence took an additional toll.  As devices converged, the number of discrete widgets the average consumer carried around dropped.  Today, I’ve got an iPhone.  Therefore, I don’t have a handheld GPS, an MP3 player, a pocket audio recorder, a digital camera… the list goes on.  Those converged products once again require fewer ASICs than their à-la-carte predecessors.

Next, we have the bonus of market consolidation.  When a new technology widget first hits the market, there are often dozens of companies competing for market share.  Most of these companies have long since read Geoffrey Moore’s books on technology adoption, and when the market matures, the non-leaders quickly fall by the wayside.  They either get gobbled up by their more successful competitors or they wither on the vine.  The net result is that fewer companies are starting new ASIC projects to compete in these maturing and established markets.

The integration and convergence story continues, however.  Today, instead of developing a “System on Chip,” most companies are actually developing “Systems on Chip” devices.  In order to amortize the cost of ASIC development, a single chip supports a number of different products and product variants. The extreme version of this effect is the ASSP.  One company develops the ASIC for a whole bunch of companies’ products.  Hey, ASIC design starts – are you listening? Take two more steps backward.

Examining all these factors, it stands to reason that ASIC design starts were always doomed to drop – all on their own.  Even if the markets for all of the end systems are thriving and growing, the dynamics and economics of ASICs in the Moore’s Law world of exponentially increasing capability and exponentially increasing design cost dictated that ASIC design activity would eventually funnel down to a small number of extremely high-value projects.

Essentially, design-start fruit has been falling off the ASIC tree for about a decade, and FPGA companies have been walking around picking it up off the ground.  

Some estimates today put FPGA design starts at something like 50X those of ASIC.  FPGA starts continue to increase, and ASIC starts are accelerating in decline.  Why, then, do we see FPGA marketers remaining so focused on stealing business away from ASIC?  Have they not noticed that this is a battle that is long since won?  Do they not see the new, much more capable competition quietly approaching from behind?

Part of the problem is certainly inertia.  FPGA companies have run on the “ASIC replacement” platform for so long that it is difficult to escape that mentality and focus on anything else. They’re also hypnotized by the market size mirage.  The ASIC market (standard cell, gate array, and full-custom chips) is estimated at somewhere around 3x to 4x the revenue of the FPGA market.  That revenue differential does a good job keeping the marketing folks in the mindset of eating away at a larger competitor.  The problem is that most of that difference is made up of a few very-high-volume applications – hard disk drives, video games – places where we aren’t likely to see an FPGA in the socket any time soon.  If one factors out these super-high-volume, probably-never-good-for-FPGA applications, the ASIC market size difference all but disappears.

FPGAs won this battle on the back of a single concept – programmability.  Programmable hardware brought us flexibility in the face of changing standards, in-field upgrades, faster time to market, reduced design risk, dramatically lower design costs, and a host of other undeniable advantages.  FPGA marketers should check their rear-view mirrors, however, because a bigger and meaner version of their own weapon is bearing down on them, stealing sockets with the same kinds of arguments about software programmability that FPGA companies have been using against ASIC for years.  As standard embedded processors get faster, cheaper, and more power efficient, the number of interesting applications that can be addressed with off-the-shelf processors (or even boards or modules) is on the rise.  It seems the only thing better than faster, easier hardware design is not having to design hardware at all.  In a food-chain fiesta, FPGA is running down the road gnawing off the tail of ASIC while standard embedded computing platforms are following FPGA and feeding from its tail.  

FPGA companies are defending against this attack, of course, by equipping their devices with both hard- and soft-core processors so that they can reap the advantages of software programmability as well. The outcome of that game, however, will probably be determined by the existence of design requirements that mandate hardware programmability – features where software cannot deliver the performance or power efficiency required.  Designs with these sorts of requirements will remain in the sweet spot of FPGA, while general-purpose embedded platforms have a better-than-even chance of winning where software alone can do the job.
