
The Last Silicon Standing

Will FPGAs be the Cockroaches of the Moore's Law Apocalypse?

We’ve heard it so often, we don’t even hear it anymore.

Every process node is twice as expensive as the last.  The non-recurring engineering (NRE) cost of designing a new digital semiconductor chip has been increasing exponentially right along with transistor capacity.  Fewer and fewer companies can afford to do a custom chip.  The minimum volume at which one can expect to recoup the NRE is increasing every year.  ASIC design starts are dropping precipitously, and ASSP (application-specific standard product) designs are also on the decline.

If you skipped that last paragraph, you didn’t miss anything new.

It is only common sense that no exponential can be sustained forever, and Moore’s Law has had almost a 45-year run.  While rumors of its demise have thus far been exaggerated, we all know the end is coming.  We see the indicators – industry analysts and engineers alike wandering down cubicle hallways with sandwich signs reading “The End is Nigh!” and babbling nonsense about impassable physical limits, economic barriers, and global-scale changes in the industry.

The Moore’s Law Apocalypse won’t be the end of semiconductors, however.  As Yogi Berra said, “Nobody goes there anymore.  It’s too crowded.”  We will continue to make more and cheaper semiconductors than we ever have, but there is likely to be a drastic narrowing of the types of semiconductors that are manufactured.

If we trace out two key trends of Moore’s Law, we can make a pretty reasonable prediction about the outcome.  First, with increasing density and decreasing unit cost, gates become almost free.  We can put just about as much stuff on a chip as we want, and the incremental cost of more stuff is very low.  Second, when that trend combines with exponentially increasing NRE, the price of an individual chip becomes mostly a matter of amortizing the development cost.  Production cost dwindles away toward zero.  This makes the economics of hardware much closer to the economics of software: once you have something that works, making and distributing production copies is nearly free.  The initial development cost is the barrier.
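
To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python.  The NRE and unit-cost figures are invented placeholders for illustration, not real foundry numbers; the point is only the shape of the curve.

def per_unit_cost(nre, unit_production_cost, volume):
    """Effective price per chip: amortized NRE plus marginal production cost."""
    return nre / volume + unit_production_cost

NRE = 50_000_000   # hypothetical development cost, in dollars
UNIT = 2.00        # hypothetical marginal production cost per chip, in dollars

for volume in (100_000, 1_000_000, 10_000_000, 100_000_000):
    print(f"{volume:>11,} units -> ${per_unit_cost(NRE, UNIT, volume):,.2f} per chip")

# As volume grows, the per-chip price collapses toward the near-zero production
# cost: the same economics as stamping out copies of software.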

What we’d like, then, way down at the end of the line, is to do one chip design that would handle everything.  We have one giant NRE, zero unit cost, and enormous design risk.  We want to do it once and do it right.

Processors will definitely survive the apocalypse.  Since processors are the universal hardware, and most product differentiation these days is done in software, there is no future digital semiconductor that doesn’t include a processor (or ten).  Since, as we mentioned, the production cost of adding more stuff to your chip is near zero, most devices will probably have multiple processors.  Then, to support the multi-core processing environment we just created, we will also have all the usual peripherals and some nice, standardized interconnect on our chip.

Memory will survive, of course.  But we’ll be seeing more and more of it on the SoC we just discussed.  After all, if you can add more stuff to your chip for free, you’ll probably use up the blank space with as much memory as you can fit.  The more memory we have on our SoC, the less we need to connect up on the board.  If we can fit all we need on the chip and avoid off-chip memory I/O altogether – so much the better.

We’ll need a lot of I/O.  With the commoditization of high-bandwidth, high-speed serial I/O, we may see a consolidation in the types of off-chip I/O used.  To get a picture, look at the effect of USB in the computer world.  We went from a wide range of competing device interconnect standards to almost a single standard in the span of a decade.  Consolidated, high-bandwidth I/O standards would solve a lot of problems in our superchip design.  First, it could well be the I/O, rather than the core, that determines die size.  Second, the cost of packaging and mounting will not drop as precipitously as the cost of transistors.  So, a smaller number of I/O standards gives us a big advantage in our ultimate semiconductor.

Finally, we’ll need something else.  While it’s likely that most future product differentiation will be in software, there will be a nagging fraction that requires some custom hardware design.  Basically, anything that’s too computationally demanding or too power-inefficient to do in software on our multiplicity of processors will need to be dropped into a piece of custom hardware.  Since we’ve already established that we’re not going to be designing a new chip, that custom hardware will most likely be implemented in some FPGA fabric.  FPGA fabric will give the hardware programmability to complement the software programmability.  Properly plumbed into our SoC, it will be the final element in a chip that can do anything.  

To approach this from another angle – if you were told that you got only one chip – for the rest of your career – and you had to make a chip that would do everything you think might need to be done, what would you put on that chip?  Probably something like we’ve described above.  

There will be other elements to successful product design, of course.  We’ll still need some analog and RF stuff, and there will be sensing and UI functions that require MEMS and other non-digital technologies.  It probably won’t be cost-effective to integrate those onto custom digital devices, however, because the product-specific nature of those technologies would put the resulting device out of the range of economic feasibility.

To consider a microcosm of this effect – look at the iPhone.  Thousands of applications that would have required discrete hardware designs in the past – applications that use GPS, digital cameras, touch-screen displays, audio, accelerometers, and lots of processing power – have all been consolidated onto one piece of hardware.  Instead of going out and buying a $100 GPS, you buy a 99-cent app.  Video camera?  Free app.  The list goes on and on.  Hardware differentiation has been replaced with software differentiation.  The cost of manufacturing and selling a custom device can’t compete with the efficiency of creating the function in software using a standard platform.

Our superchip is already taking shape, of course.  We see devices coming from all directions headed to basically the same destination.  Last month, Xilinx announced a future product – an ARM-based computing subsystem surrounded by memory and FPGA fabric.  Actel has been manufacturing Fusion FPGAs with embedded ARM processors and on-chip analog for a while now.  Cypress PSoC is achieving the same thing from the MCU side.  Other companies are creating FPGA fabric IP blocks that can be placed on other SoCs.  Processor companies, FPGA companies, MCU companies, and others are all evolving their architectures toward a similar vanishing point.

Different applications will still require different power and form-factor profiles, of course.  Our future superchip is likely to be available from several vendors who will be battling for bragging rights for the slickest processing architecture, the most throughput per power, and the most universal packaging.  Vendors will supply competing development kits and tools that allow you to quickly customize the software and hardware on the superchip for your particular application.  To get a glimpse of what this might look like, think of Altium’s approach with their Altium Designer line of tools.  Custom silicon is not part of the equation.  Products are made of software, customized hardware in FPGA fabric, and application-appropriate form factors and user interfaces.

There’s a brave new future waiting for us out there, and it looks to us like FPGA technology will always be a part of it. 
