
Kicking a Dead Horse

FPGAs Going the Distance Against ASIC

Imagine seeing the following copy in a modern ad: “The new BMW 5-series sedan outperforms the horse and buggy in every important way. Your family will travel farther in a day and arrive less fatigued thanks to our superior cruising speed, climate-controlled cabin, and luxurious upholstery. It’s so much easier to use as well – no more hitching up the team before you start, and no more watering, feeding, and grooming at the end of the day. You just turn the key and drive away. Simple as that. So, before you snap up that new stallion you’ve been eyeing – consider a car instead.”

You’d probably feel that our Bavarian auto-marketers were out of touch with the times.  Certainly, the auto industry’s main mission was once the replacement of horse-drawn conveyances, but eventually the automobile won, and marketers shifted their sights to more serious competition.

As of last week, FPGAs are still fighting full-tilt to steal market share from ASICs.  The problem is, there is almost nothing left to steal.  Sure, FPGAs have a bright high-growth future ahead of them, but it won’t come from luring away ASIC starts.  That battle has been long since won.

Let’s look at the trends.  First, ASIC design starts have been on a steady (and accelerating) decline for years.  This year, some forecasters estimate they will fall by over 20%.  There are a number of reasons for this decline, and most of them have nothing to do with FPGAs.  First, and most often cited, is the cost of developing a current-generation ASIC.  A conservative estimate for a 65nm ASIC project is $10 million USD.  If your only reason for building an ASIC is reduced unit cost, you’d better have a pretty big production volume planned to amortize that kind of expense.  If you’re building a million units, that’s $10 per chip in development costs – even if your silicon were free.  This creates a catch-22 of sorts.  Most of the systems that would support a seven-digit (or larger) production volume are some sort of consumer product.  On the other hand, most consumer devices are so cost-constrained that $10 of development cost on a single chip is too much BOM impact.  This isn’t an ASIC-or-FPGA story.  This is an ASIC-or-nothing story.  Even when a design requires an ASIC for performance or power reasons, many projects simply become economically infeasible and never start.
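The amortization arithmetic above can be sketched in a few lines of Python. The $10 million NRE figure comes from the paragraph; the per-chip ASIC and FPGA prices below are hypothetical placeholders, chosen only to illustrate the break-even calculation:

```python
# Illustrative NRE amortization math. NRE = $10M is the article's
# conservative 65nm estimate; the unit prices are assumed values.

def amortized_unit_cost(nre_dollars: float, volume: int, unit_cost: float) -> float:
    """Effective per-unit cost once development (NRE) is spread over the run."""
    return unit_cost + nre_dollars / volume

NRE = 10_000_000      # conservative 65nm ASIC development cost (from the article)
asic_unit = 5.00      # assumed ASIC silicon cost per chip (hypothetical)
fpga_unit = 25.00     # assumed FPGA price for the same function (hypothetical)

for volume in (100_000, 1_000_000, 10_000_000):
    asic_total = amortized_unit_cost(NRE, volume, asic_unit)
    print(f"{volume:>10,} units: ASIC effective cost ${asic_total:,.2f} "
          f"vs FPGA ${fpga_unit:.2f}")

# Volume at which the ASIC's amortized cost matches the FPGA price:
break_even = NRE / (fpga_unit - asic_unit)
print(f"Break-even volume: {break_even:,.0f} units")
```

Under these assumed prices, the ASIC only pulls ahead of the FPGA somewhere past half a million units – which is exactly the catch-22: the volumes that justify the NRE are mostly consumer volumes, where $10 of amortized development cost per chip is already too much.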

Second is the trend of integration.  During the golden age of ASIC – the 1980s and 1990s – we integrated like mad.  Our systems dwindled from four or five ASICs to two, and then to one; then we pulled most of the rest of the BOM onto the chip as well.  Our original goal was the “system-on-chip,” and by the end of the 1990s, we had ostensibly reached that goal.  Of course, for a static number of systems, that means the number of ASICs required dropped by a factor of three or four right there.  The integration glut didn’t stop there, however.  Convergence took an additional toll.  As devices converged, the number of discrete widgets the average consumer carried around dropped.  Today, I’ve got an iPhone.  Therefore, I don’t have a handheld GPS, an MP3 player, a pocket audio recorder, a digital camera… the list goes on.  Those converged products once again require fewer ASICs than the à la carte versions.

Next we have the added bonus of market consolidation.  When a new technology widget first hits the market, there are often dozens of companies competing for market share.  Most of these companies, however, have long since read Geoffrey Moore’s books on technology adoption, and when the market matures, the non-leaders quickly fall by the wayside.  They either get gobbled up by their more successful competitors or they wither on the vine.  The net result is that fewer companies are starting new ASIC projects to compete in these maturing and established markets.

The integration and convergence story continues, however.  Today, instead of developing a “System on Chip,” most companies are actually developing “Systems on Chip” devices.  In order to amortize the cost of ASIC development, a single chip supports a number of different products and product variants. The extreme version of this effect is the ASSP.  One company develops the ASIC for a whole bunch of companies’ products.  Hey, ASIC design starts – are you listening? Take two more steps backward.

Examining all these factors, it stands to reason that ASIC design starts were always doomed to drop – all on their own.  Even if the markets for all of the end systems are thriving and growing, the dynamics and economics of ASICs in the Moore’s Law world of exponentially increasing capability and exponentially increasing design cost dictated that ASIC design activity would eventually funnel down to a small number of extremely high-value projects.

Essentially, design-start fruit has been falling off the ASIC tree for about a decade, and FPGA companies have been walking around picking it up off the ground.  

Some estimates today put FPGA design starts at something like 50X those of ASIC.  FPGA starts continue to increase, while ASIC starts decline at an accelerating rate.  Why, then, do we see FPGA marketers remaining so focused on stealing business away from ASIC?  Have they not noticed that this is a battle that was won long ago?  Do they not see the new, much more capable competition quietly approaching from behind?

Part of the problem is certainly inertia.  FPGA companies have run on the “ASIC replacement” platform for so long that it is difficult to escape that mentality and focus on anything else.  They’re also hypnotized by the market-size mirage.  The ASIC market (standard cell, gate array, and full-custom chips) is estimated at somewhere around 3x to 4x the revenue of the FPGA market.  That revenue differential does a good job of keeping the marketing folks in the mindset of eating away at a larger competitor.  The problem is that most of that difference is made up of a few very-high-volume applications – hard disk drives, video games – places where we aren’t likely to see an FPGA in the socket any time soon.  If one factors out these super-high-volume, probably-never-good-for-FPGA applications, the ASIC market-size difference all but disappears.

FPGAs won this battle on the back of a single concept – programmability.  Programmable hardware brought us flexibility in the face of changing standards, in-field upgrades, faster time to market, reduced design risk, dramatically lower design costs, and a host of other undeniable advantages.  FPGA marketers should check their rear-view mirrors, however, because a bigger and meaner version of their own weapon is bearing down on them, stealing sockets with the same kinds of arguments about software programmability that FPGA companies have been using against ASIC for years.  As standard embedded processors get faster, cheaper, and more power efficient, the number of interesting applications that can be addressed with off-the-shelf processors (or even boards or modules) is on the rise.  It seems the only thing better than faster, easier hardware design is not having to design hardware at all.  In a food-chain fiesta, FPGA is running down the road gnawing off the tail of ASIC while standard embedded computing platforms are following FPGA and feeding from its tail.  

FPGA companies are defending against this attack, of course, by equipping their devices with both hard- and soft-core processors so that they can reap the advantages of software programmability as well. The outcome of that game, however, will probably be determined by the existence of design requirements that mandate hardware programmability – features where software cannot deliver the performance or power efficiency required.  Designs with these sorts of requirements will remain in the sweet spot of FPGA, while general-purpose embedded platforms have a better-than-even chance of winning where software alone can do the job.

