
Hardware Innovation is Dead

Long Live Software Innovation!

Back in the days of the dot-com boom, I used to go to processor conferences several times a year.  Every one of these events was packed full of wild and wacky hardware innovations.  Nobody had more ideas than FPGA designers.  In a remarkably short period of time, FPGAs were transformed from generic “sea of gates” devices into complex SoCs.  They added more and more hardwired features: memory, DSP blocks, high-speed I/O, and even processor cores.

As if this weren’t enough, FPGAs made dramatic improvements in power and cost.  Some of these advances came from architectural improvements, but process technology also deserves credit.  FPGAs were once a generation or two behind the latest fabrication process, but now they are among the first devices to ship on a leading-edge process.

Sadly, I think those heady days of hardware innovation are behind us.  I have no doubt that FPGAs will continue to improve, but I believe the coming advances will be incremental rather than revolutionary.  It’s not that there are no good ideas left—I’m sure there are many novel FPGA architectures waiting in the wings. No, the problem is that existing architectures have developed a huge momentum that is difficult to overcome.

Let’s look at things from the perspective of a hypothetical FPGA startup.  The latest offerings from Xilinx and Altera are fabricated in a 40nm process, so ideally the startup would like to use this same process.  There’s just one small problem: the cost of building a chip goes up exponentially as you move to smaller geometries.  Mask costs alone run about $1 million at 65nm; moving to 40nm will set you back $2 million.

$2 million is nothing to sneeze at, but even that number is only a small fraction of the overall development costs.  For example, when Altera designs a new FPGA family, it starts by running a series of test chips.  One chip might try out SerDes transceivers, another might try out LUT fabrics, and so on.   Altera typically runs about 10 of these test chips, so it has to spend roughly $20 million on mask sets alone.  That’s a lot of dough!

Faced with these costs, our startup is likely to choose a process that is a generation behind the industry leaders.   As a result, our startup’s architectural advantages are going to be offset by fabrication disadvantages, making it hard for our innovative architecture to compete.  (But it can happen!  For example, startup Achronix has 90nm parts that run at 1.5 GHz, much faster than the 600 MHz Altera gets from its 40nm parts.)

Let’s suppose that our little startup manages to overcome this challenge and rolls out hardware that beats the pants off the market leaders.  Success!  But before you break out the champagne, there is another problem: what about development tools, IP libraries, reference designs, etc.?  Xilinx and Altera have massive, decades-long investments in these areas.  Given our startup’s late entry and limited resources, how is it going to catch up?

On top of all of this we have an installed base problem.  Few designs start life from a blank screen.  Instead, most new designs incorporate large chunks of existing designs.  That’s a big hurdle for our startup.  Those existing designs are most likely built for a Xilinx or Altera FPGA, so the easiest path forward is to stick with the old hardware vendor.  Plus, developers are comfortable with the mainstream tools and don’t want to learn a new toolset.

So far our discussion has centered on startups vs. the big players, but the same constraints hold back hardware innovation at Xilinx and Altera.   Suppose Altera hits on a completely new FPGA architecture.  The company couldn’t just crank out chips and call it a day.  Just like our hypothetical startup, Altera would have to create tools, IP libraries, etc. to support the new design.   Altera would be able to leverage its existing resources—something our startup can’t do—but it would still have a lot of work to do.   For example, Altera’s IP libraries are designed around specific hardware features such as the hard-wired DSP blocks.  These libraries would need to be heavily re-written for the new hardware architecture.  The size of this effort makes it hard for even the big guys like Altera to roll out new designs.

So is FPGA innovation dead?  Not at all—it is merely moving from the hardware domain to the software domain.  In particular, I see a bright future for innovative development tools and for the use of processor cores within FPGAs. 

Let’s start with development tools.  The difficulty of the design process is the number-one bugaboo holding FPGAs back from wider adoption.  I’ve seen figures showing that FPGA design takes 2-5 times as many engineering hours as software design.  One obvious way to narrow this gap is to make FPGA development look more like software development.  That’s why I’m enthusiastic about C-to-FPGA design methodologies like Mentor Graphics’ Catapult C Synthesis and Impulse Accelerated Technologies’ Impulse CoDeveloper.  These tools take C code and generate correct-by-construction RTL.  The tools are far from idiot-proof, but they make it a heck of a lot easier to churn out an FPGA design.
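To make the C-to-FPGA idea concrete, here is a sketch of the restricted C style these tools typically accept: fixed loop bounds, no recursion, no dynamic memory.  (The function name and tap count here are my own illustration, not taken from any vendor example.)  An HLS tool can unroll a loop like this into a parallel multiply-accumulate datapath in the generated RTL.

```c
#include <stddef.h>

#define TAPS 4  /* fixed, compile-time bound -- the tool can fully unroll */

/* A 4-tap FIR filter in HLS-friendly C: static bounds, no heap, no
   recursion.  A C-to-FPGA tool would map this loop to a pipeline of
   multipliers feeding an adder tree. */
int fir4(const int coeff[TAPS], const int sample[TAPS])
{
    int acc = 0;
    for (size_t i = 0; i < TAPS; i++)
        acc += coeff[i] * sample[i];
    return acc;
}
```

Part of the appeal is that this same source compiles and runs on a PC, so you can debug the algorithm in plain software before ever synthesizing it.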

But why stop there?  If software is so much easier, why not just drop a processor into the FPGA?  Of course, I’m not the first person to think of this idea.  As I mentioned earlier, FPGAs have already sprouted hardwired processors—examples include the PowerPC in select Virtex parts and the AVR microcontroller in the Atmel FPSLIC.  If these chips don’t work for you, you can always take your favorite device and drop in a soft processor like the Xilinx MicroBlaze, Altera Nios II, or Lattice LatticeMico32.

I think soft processors are going to play a huge role in the future of FPGA design.  Most designs need such things as state machines and supervisory control functions.  Why bother building this logic yourself when you can just grab one of these processors?   Not only do these processors give you pre-built logic, they also give you access to pre-built software.  For example, these soft processors typically allow you to run Linux, letting you take advantage of a huge supply of open-source software.  With this benefit in mind, I am particularly optimistic about the future of the ARM Cortex-M1, a relatively new ARM variant designed specifically for FPGAs.   The ARM architecture is the most popular, best-supported embedded processor in the world.  Drop a Cortex-M1 into your FPGA, and you suddenly have access to a vast universe of software.  Nice!
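As a sketch of the kind of supervisory logic that can move out of RTL and into software on a soft core (the state names and inputs here are hypothetical), here is a simple control state machine written as plain C—the sort of loop you might run on a MicroBlaze or Nios II instead of hand-coding it in Verilog:

```c
#include <stdbool.h>

/* Hypothetical supervisory state machine: the kind of control logic a
   soft processor can absorb.  Each call advances the machine one step
   based on its current state and inputs. */
typedef enum { IDLE, RUNNING, FAULT } sup_state;

sup_state sup_step(sup_state s, bool start, bool error)
{
    switch (s) {
    case IDLE:    return start ? RUNNING : IDLE;
    case RUNNING: return error ? FAULT : RUNNING;
    case FAULT:   return IDLE;  /* supervisor resets after a fault */
    }
    return IDLE;  /* defensive default */
}
```

In RTL this would be a hand-written case statement plus registers; on a soft core it is a few lines of portable C that you can debug with printf and extend without re-synthesizing the fabric.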

This brings me to a big question: If software development is so great, why not forget about the FPGA fabric altogether?  Why not just throw a bunch of cores into a chip, surround it with I/O, and call it a day?  Once again, I’m not the first person to think of this.  You can buy “manycore” chips that fit this description from vendors like picoChip (which specializes in femtocells), Netronome (network processors), and Tilera (networking, base stations, and video). Given the advantages of software design, why haven’t these chips crushed FPGAs?  One reason is that these chips are typically designed for very narrow markets.  That means you’re usually stuck with narrowly-targeted instruction sets, limited IO options, and so on.  Another problem with these chips is that they typically come in a limited number of flavors—you don’t get many options in terms of device performance, cost, or power.  Finally, these chips generally come from small startups.  Tying yourself to a startup is risky in any economy, but is especially nerve-wracking during a downturn.

If there is a rule-breaker, it’s likely to be NVIDIA’s CUDA chips.  These chips have two big advantages over the other options.  First, there are dozens of CUDA chips on the market, so you can choose the level of performance, cost, and power that’s right for you.  Second, NVIDIA is a big, established company—it’s not going to disappear overnight.  The fact that NVIDIA has a big established base also gives it a leg up in areas such as tools, number of developers, and available applications.  Of course, there are downsides to CUDA, such as the fact that it offers a very limited set of IO.

The point to all this rambling is that we could halt hardware innovation today and still get big improvements from software innovation.   These improvements include easier, faster, and cheaper development cycles, as well as expanded use of FPGAs and “manycore” processors.  Although I’ll miss the days of rapid hardware evolution, I am confident that many exciting developments lie ahead of us.  Now go out there and make it happen!
