feature article

FPGAs and the IC Bubble

The Techonomics of Programmability

Exponentials are exciting!

Anything in the real world that follows an exponential curve is a recipe for increased adrenaline production.  If we’re bopping along in our normal linear lives and we bump into a geometric progression, we (those of us who took math, anyway) naturally expect that we’re in for a short and exciting ride.  Something that happens in twos or fours today will be exploding into the 128s and 256s by the end of the week, and next month will be flaming out in the bazillions.  Although these events can have huge amplitudes, their short duration typically keeps the integral from amounting to much, and their lasting effect is minimal.

What the heck was that last paragraph talking about?

Let’s come back from the arena of abstract arithmetic for a bit and drop into the real world.  Your e-mail box catches a less-than-funny forward from one of those “forwarding friends” – the type that sends you about twelve uninteresting e-things each day, ranging from virus alerts to chain letters to pictures of political candidates with farm animals photoshopped onto their heads.  If you’re early in the wave, you may see the e-joke only once this week.  Next week, however, you’ll get three copies – the week after, maybe sixty – and the week after that they’ll fill your spam bucket as the exponential explosion of forwards gets the joke to every man, woman, and child in the world with more bandwidth than reading time.  By the fourth week, the joke is gone completely, flamed out in a fiery flash of fuel deprivation.  The world is left largely unchanged by the event.

Occasionally, however, we get a system of balanced exponentials that somehow keep each other in check.  Remember in Calculus when you were doing those limits with something infinite in both the numerator and the denominator, and your job was to figure out which infinity would win so you’d know if the real answer was zero, or one, or… infinity again?

OK, sorry, we accidentally reverted into math speak.  Back to the real world – it won’t happen again.

Moore’s Law is an exponential that has been running unchecked in our economy and technology spheres for over forty years without flaming out.  That forty-year run is made possible by a number of exponentials that somehow, elegantly, balance each other – allowing us to absorb the changes to our lives, our economy, and our technology over the span of an adult lifetime.  Since the 1960s, we’ve seen a constant geometric expansion in the number of transistors we can drop onto a square of silicon, which has produced exponential increases in computing power, bandwidth, and integration – balanced by corresponding decreases in per-item power consumption, cost, and size.  We’ve all watched the price of a personal computer held miraculously in a techonomic stasis for over two decades: the typical consumer can buy a state-of-the-art machine for almost exactly the same price, drawing almost exactly the same amount of power, and consuming almost exactly the same desktop real estate, while the performance and capability of the machine increase exponentially.  Even that progress is held in check by the exponential increase in software complexity, which seems to result in each generation of operating system requiring the same annoying two minutes to boot on a 3 GHz, 64-bit machine as it did on a 3 MHz, 8-bit one.

Those of us in the IC industry have had front-row seats to the interstices of this seemingly continuous flow.  If we’ve been around long enough, we remember constructing complex (at the time) digital circuits from enormous quantities of TTL chips and seriously patting ourselves on the back if our fantastic state-machine optimization could chop 20 chips off the board.  (Remember the Integrated Woz Machine, anyone?)  Then, the 1980s brought us the ASIC revolution.  We could shave boatloads of chips off our BOMs by designing a few well-placed ASICs.  Large-scale integration was the new rage, and we all started trading our IC Masters for LSI Logic databooks.  Money, talent, and technology poured into the field of ASIC design.  The EDA industry expanded, taking up their long-term role as purveyors of productivity for the ASIC design community and balancing their own set of offsetting exponentials – increased design complexity, shrinking time-to-market, rising compute power, and falling clock periods.  EDA marketers’ first PowerPoint slide always had a graph of Moore’s Law – showing the exponential increase in the number of transistors on a device.  Below the graph of Moore’s Law was another line showing the linear increase in engineering productivity – the number of gates per day that a typical ASIC designer could achieve.  The area between those two lines was known as “the gap,” and this was where the marketers’ new – whatever – tool would save the day, increasing our productivity to fill “the gap.”

With each generation of ASIC, the amount of integration increased.  Designs went from a hundred chips to ten chips to three chips to one.  Single-chip designs continued the integration glut with “convergence,” where multiple legacy systems were replaced by one converged device.  Our GPS, digital camera, music player, telephone, movie viewer, internet browser, and much more were removed from our personal gear bags and replaced by a single smartphone.  This final stage of convergence burst the ASIC design bubble, and ASIC design starts began to decline sharply.

Many in the industry bemoaned the ASIC design start slide, blaming it on the increasing cost of each new process generation.  Cost aside, however, with all our progress on integration and convergence, we were designing ourselves out of a job.  When one ASIC in one converged device became able to replace twenty or thirty separate devices, lots of potential ASIC designs just vanished in a puff of smoke.  When you look through the thousands of applications for the iPhone, consider how many would have required a custom ASIC design to productize even a few years ago.  Programmability stepped in to fill “the gap” where tools to improve hardware design productivity had fallen short.

Today, designing custom chips is an ultra-expensive, high-stakes game that few companies have the resources or resolve to continue.  Like the plot of a bad reality show, the field of competitors has gradually narrowed from a wide and diverse pack to a few select, well-funded, and brave teams that can make IC design into a profitable business. Unfortunately, as the number of viable players has decreased, the infrastructure that supports custom IC development has begun to show signs of crumbling.  The number of companies capable of operating viable fabs in the world continues to decrease.  The number of ASIC design teams available to support the EDA industry remains in decline, and, with the economic pressures of today, more of the essential IC infrastructure seems poised to fail.

The FPGA industry has offered respite from these pressures.  With current FPGA platforms at the 40/45nm node offering astounding capabilities at unbelievably low price points, FPGAs are truly a viable replacement for ASICs in many applications.  With FPGAs now truly delivering the promise of hardware- and software-programmable systems on chip, we can build systems that would previously have required prohibitively expensive ASIC design projects right on our desktops with minuscule investment in software tools and development hardware.  However, FPGAs are not immune to the impact of the ASIC bubble.  FPGA companies are dependent on much of the ASIC infrastructure to continue their own progress down Moore’s road.  In the pathological case where one super-chip can satisfy all the world’s technology needs, the entire ecosystem of chip design would be balanced on the head of that one pin. While we head down the road of technological progress in that general direction, we should be wary of that pin popping yet another bubble.  

As we said, exponentials are exciting.
