The Last Silicon Standing

Will FPGAs be the Cockroaches of the Moore's Law Apocalypse?

We’ve heard it so often, we don’t even hear it anymore.

Every process node is twice as expensive as the last.  The non-recurring engineering (NRE) cost associated with designing a new digital semiconductor chip has been increasing exponentially right along with transistor capacity.  Fewer and fewer companies can afford to do a custom chip.  The minimum volume at which one can expect to recoup the NRE is increasing every year.  ASIC design starts are dropping precipitously.  ASSP designs are also on the decline.

If you skipped that last paragraph, you didn’t miss anything new.

It is only common sense that no exponential can be sustained forever, and Moore’s Law has had almost a 45-year run.  While rumors of its demise have thus far been exaggerated, we all know the end is coming.  We see the indicators: industry analysts and engineers alike wandering down cubicle hallways with sandwich signs reading “The End is Nigh!” and babbling nonsense about impassable physical limits, economic barriers, and global-scale changes in the industry.

The Moore’s Law Apocalypse won’t be the end of semiconductors, however.  As Yogi Berra said, “Nobody goes there anymore.  It’s too crowded.”  We will continue to make more and cheaper semiconductors than we ever have, but there is likely to be a drastic narrowing of the types of semiconductors that are manufactured.

If we trace out two key trends of Moore’s Law, we can make a pretty reasonable prediction about the outcome.  First, with increasing density and decreasing unit cost, gates become almost free.  We can put just about as much stuff on a chip as we want, and the incremental cost of more stuff is very low.  Second, when that trend combines with exponentially increasing NRE, the price of an individual chip becomes mostly a matter of amortizing the development cost.  Production cost dwindles away toward zero.  This makes the economics of hardware much closer to the economics of software.  Once you have something that works, making and distributing production copies is nearly free.  The initial development cost is the barrier.
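
To make that arithmetic concrete, here is a minimal sketch of the amortization curve.  The $50M NRE and $2 unit cost are illustrative assumptions, not industry figures:

    # A minimal sketch of the amortization argument above, with made-up numbers.
    # The NRE and unit cost are illustrative assumptions, not industry figures.

    def price_per_chip(nre, unit_cost, volume):
        """Per-chip price needed just to recover the development cost."""
        return nre / volume + unit_cost

    NRE = 50e6        # assumed $50M development cost for an advanced-node design
    UNIT_COST = 2.00  # assumed near-zero marginal production cost per die

    for volume in (1e5, 1e6, 1e7, 1e8):
        price = price_per_chip(NRE, UNIT_COST, volume)
        print(f"{volume:>12,.0f} units -> ${price:,.2f} per chip")

At low volumes the development cost dominates completely; at very high volumes the per-chip price collapses toward the (tiny) production cost, which is exactly the software-like economics described above.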

What we’d like, then, way down at the end of the line, is to do one chip design that would handle everything.  We have one giant NRE, zero unit cost, and enormous design risk.  We want to do it once and do it right.

Processors will definitely survive the apocalypse.  Since processors are the universal hardware, and most product differentiation these days is done in software, there is no future digital semiconductor that doesn’t include a processor (or ten).  Since, as we mentioned, the production cost of adding more stuff to your chip is near zero, most devices will probably have multiple processors.  Then, to support the multi-core processing environment we just created, we will also have all the usual peripherals and some nice, standardized interconnect on our chip.

Memory will survive, of course.  But we’ll be seeing more and more of it on the SoC we just discussed.  After all, if you can add more stuff to your chip for free, you’ll probably use up the blank space with as much memory as you can fit.  The more memory we have on our SoC, the less we need to connect up on the board.  If we can fit all we need on the chip and avoid off-chip memory I/O altogether, so much the better.

We’ll need a lot of I/O.  With the commoditization of high-bandwidth, high-speed serial I/O, we may see a consolidation in the types of off-chip I/O used.  To get a picture, look at the effect of USB in the computer world.  We went from a wide range of competing device interconnect standards to almost a single standard in the span of a decade.  Consolidated, high-bandwidth I/O standards would solve a lot of problems in our superchip design.  First, it could well be the I/O, rather than the core, that determines die size.  Second, the cost of packaging and mounting will not drop as precipitously as the cost of transistors.  So, a smaller number of I/O standards gives us a big advantage in our ultimate semiconductor.
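
To see why I/O can set the die size, here is a back-of-envelope sketch.  The 60 µm pad pitch, the pad counts, and the 25 mm² core are all illustrative assumptions:

    # Back-of-envelope check of whether a die is pad-limited or core-limited.
    # The pad pitch, pad counts, and core size are illustrative assumptions.
    import math

    def pad_limited_edge_mm(io_count, pad_pitch_um):
        """Minimum die edge needed to fit all I/O pads on the perimeter."""
        pads_per_edge = math.ceil(io_count / 4)       # pads spread over four edges
        return pads_per_edge * pad_pitch_um / 1000.0  # micrometers to millimeters

    core_edge_mm = 5.0  # assumed edge length of a 25 mm^2 logic core
    for ios in (200, 400, 800, 1600):
        edge = pad_limited_edge_mm(ios, pad_pitch_um=60.0)
        limit = "I/O-limited" if edge > core_edge_mm else "core-limited"
        print(f"{ios:>5} pads -> {edge:4.1f} mm minimum edge ({limit})")

Past a few hundred pads, the perimeter rather than the logic sets the silicon area, which is why a few fast, consolidated serial standards beat many slow, parallel ones.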

Finally, we’ll need something else.  While it’s likely that most future product differentiation will be in software, there will be a nagging fraction that requires some custom hardware design.  Basically, anything that’s too computationally demanding or too power-inefficient to do in software on our multiplicity of processors will need to be dropped into a piece of custom hardware.  Since we’ve already established that we’re not going to be designing a new chip, that custom hardware will most likely be implemented in some FPGA fabric.  FPGA fabric will give us hardware programmability to complement our software programmability.  Properly plumbed into our SoC, it will be the final element in a chip that can do anything.
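
As a toy illustration of that partitioning rule, consider the sketch below.  The task fields and the ten-times energy threshold are hypothetical, chosen only to make the decision concrete; real partitioning involves profiling and judgment:

    # Toy software-vs-fabric partitioning rule.  All fields and thresholds
    # are hypothetical illustrations, not a real partitioning methodology.
    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        required_mops: float    # throughput the task needs (million ops/s)
        cpu_budget_mops: float  # what the on-chip processors can sustain
        sw_nj_per_op: float     # energy per operation in software
        hw_nj_per_op: float     # energy per operation in FPGA fabric

    def place(t: Task) -> str:
        """Drop a task into FPGA fabric when software is too slow or too hot."""
        too_slow = t.required_mops > t.cpu_budget_mops
        too_hot = t.sw_nj_per_op > 10.0 * t.hw_nj_per_op  # assumed 10x rule
        return "FPGA fabric" if (too_slow or too_hot) else "software"

    print(place(Task("video codec", 5000, 1200, 4.0, 0.2)))  # -> FPGA fabric
    print(place(Task("UI logic", 50, 1200, 3.0, 0.5)))       # -> software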

To approach this from another angle: if you were told that you got only one chip for the rest of your career, and it had to do everything you think might ever need to be done, what would you put on it?  Probably something like we’ve described above.

There will be other elements to successful product design, of course.  We’ll still need some analog and RF stuff, and there will be sensing and UI functions that require MEMS and other non-digital technologies.  It is unlikely to be cost-effective to integrate those onto custom digital devices, however, because their product-specific nature would put the resulting device out of the range of economic feasibility.

To consider a microcosm of this effect, look at the iPhone.  Thousands of applications that would have required discrete hardware designs in the past (applications that use GPS, digital cameras, touch-screen displays, audio, accelerometers, and lots of processing power) have all been consolidated onto one piece of hardware.  Instead of going out and buying a $100 GPS, you buy a 99-cent app.  Video camera?  Free app.  The list goes on and on.  Hardware differentiation has been replaced with software differentiation.  The cost of manufacturing and selling a custom device can’t compete with the efficiency of creating the function in software on a standard platform.

Our superchip is already taking shape, of course.  We see devices coming from all directions headed to basically the same destination.  Last month, Xilinx announced a future product: an ARM-based computing subsystem surrounded by memory and FPGA fabric.  Actel has been manufacturing Fusion FPGAs with embedded ARM processors and on-chip analog for a while now.  Cypress’s PSoC line is approaching the same destination from the MCU side.  Other companies are creating FPGA fabric IP blocks that can be placed on other SoCs.  Processor companies, FPGA companies, MCU companies, and others are all evolving their architectures toward a similar vanishing point.

Different applications will still require different power and form-factor profiles, of course.  Our future superchip is likely to be available from several vendors who will battle for bragging rights over the slickest processing architecture, the most throughput per watt, and the most universal packaging.  Vendors will supply competing development kits and tools that allow you to quickly customize the software and hardware on the superchip for your particular application.  To get a glimpse of what this might look like, think of Altium’s approach with their Altium Designer line of tools.  Custom silicon is not part of the equation.  Products are made of software, customized hardware in FPGA fabric, and application-appropriate form factors and user interfaces.

There’s a brave new future waiting for us out there, and it looks to us like FPGA technology will always be a part of it. 

