
The Changing Customizability Continuum

ASIC, ASSP, CSSP, FPGA, SoC, MPSoC, GPU, MPU, CPU

We spend a lot of time in the semiconductor business trying to put chips into bins. We in the press and those in the analyst community are constantly struggling to label particular semiconductor devices, and financial types are always trying to figure out what “market” a particular chip belongs in. As Moore’s Law has pushed us into higher and higher levels of integration, most of the interesting devices out there have a little bit of everything in them.

Consider, for example, the upcoming Zynq UltraScale+ devices recently announced by Xilinx. Even though Xilinx is an FPGA company, and even though a substantial amount of Zynq chip area is taken up by FPGA fabric, Xilinx does not call Zynq devices “FPGAs.” The company has bounced around various monikers over the years. (Do you remember “Extensible Processing Platforms”?) We refer to this category of devices as “Heterogeneous Integrated Processing Platforms (HIPPs).” Xilinx has recently fallen into calling them “MPSoCs,” for “Multicore and Multiprocessor Systems on Chip.” (We don’t love that name because it would fit just about any SoC with more than one processor core, and it makes no reference to the FPGA fabric, which is the chip’s primary differentiator.) Altera has similar devices, which they refer to simply as “SoC FPGAs.” And, while that title wins in terms of elegant simplicity, it falls short when it comes to expressing the dramatic change in capability this category of chip represents.

Anyway, a Zynq UltraScale+ device (and likely also a next-generation Altera SoC FPGA) will contain multiple 64-bit CPUs, an MPU, a GPU, FPGA fabric, memory, sophisticated I/O, DSP resources, and analog and digital blocks – just about anything you can name except possibly RF and power circuitry. It’s no wonder we have a hard time classifying “all but the kitchen sink” devices like this.

Instead of looking at what’s inside, we should be thinking about what jobs a chip is intended for. If we look at a device’s intended application, that gives us a much more realistic view of the “market” than if we look at the kinds of transistors and the type of architecture inside the chip that lets it accomplish its task. In fact, looking at the “how” can be a dangerous distraction from the “what” – which is where the real competition happens in semiconductors. 

Let’s take a couple of examples, and let’s start with one of our favorites – the “FPGA” market. The term “FPGA” definitely refers to a chip architecture – a way of structuring the transistors on a chip – and not to any particular application. In fact, FPGAs can be used in an enormous range of applications – from communications and networking infrastructure to consumer to industrial automation to automotive to military/aerospace to computing. And each of those vertical markets, and each of the applications within them, might require a different kind of FPGA. Furthermore, in many of those markets and applications, FPGAs compete against other types of chips that provide the same or similar capabilities.

And the gamut of FPGA types is extreme – ranging from tiny, ultra-low-power, mobile-oriented devices like those Lattice Semiconductor sells for pennies – to enormous, power-hungry, multi-billion-transistor behemoths like Xilinx Virtex and Altera Stratix that can sell for tens of thousands of dollars per copy. Of course, the tiny devices never compete with the large devices, and they rarely even sell into the same vertical markets. The only thing they have in common is a similarity in the way the transistors are structured on the chip. And even that doesn’t stand up to much scrutiny if you’re trying to make a case that the two things are the same, or that they belong to the same “family” of semiconductors.

What this leads us to is the reality that there is no such thing as “the FPGA market.” FPGAs compete in a number of markets, and they often compete against other types of chips that are not FPGAs. For digital-signal-processing applications, for example, FPGAs compete with DSP processors. For video applications, FPGAs compete with a number of ASSPs. In many control applications, FPGAs compete with microcontrollers (MCUs), and now, in high-performance computing, FPGAs (and SoC FPGAs) find themselves competing against traditional CPUs and GPUs.

The biggest weapon FPGAs bring to each of those contests is configurability. Going up against an application-specific standard part, an FPGA will typically lose on cost, power consumption, and/or performance. But throw in the FPGA’s configurability and the device can hit a much larger target. It may be able to handle applications that the ASSP cannot. In fact, an FPGA is often used in conjunction with an ASSP – to morph the ASSP into a version of the application it wasn’t quite designed to handle in the first place. And, if FPGAs are available with the right hardened IP inside, they can often replace the ASSP altogether.

If we back up and defocus a bit more, paying attention to the concept of configurability, we can see a huge range of devices suited to various applications. Some jobs could be done with the most configurable solution of all – a conventional processor running the entire application in software. Or, they could be done with the least configurable solution – a custom-made ASIC. The performance, unit cost, power consumption, and form-factor attributes of each solution will vary widely. Typically, the less-configurable solution outperforms the more-configurable one, because its functions are realized in optimized hardware.

So, what we really need are solutions with both attributes – optimized hardware for the critical bits, and configurability to adapt to the widest possible set of situations and applications. Interestingly, QuickLogic was one of the first companies to identify this need and act on it. Many years ago, the company coined the term “Customer-Specific Standard Parts (CSSPs)” for devices that are basically ASSPs with some FPGA fabric built in for customer- and application-specific customization. Today, Lattice Semiconductor is following a similar path with small PLDs optimized for particular application areas by virtue of the hardened IP they carry on chip.

At the other end of this spectrum are the new HIPPs/MPSoCs/SoC FPGAs. These devices take the maximum configurability of conventional processors and pair it with the hardware configurability of FPGA fabric. On the surface, these would seem to be the most configurable and universal devices of all – like some kind of “Swiss Army” chips. But do they suffer the same shortcoming as Swiss Army knives – jack of all trades, master of none? As long as the task is “computing,” probably not. By having a range of computing engines – from conventional applications processors to MCUs to GPUs to FPGA-based accelerators – these devices have the potential to match each part of an application with the best kind of compute engine.
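
To make that matching concrete, here’s a minimal sketch of the idea. The stage names and engine labels are purely illustrative assumptions of ours – not any vendor’s actual API or device description:

```python
# Purely illustrative sketch: matching each stage of an application to
# the compute engine that suits it best on one heterogeneous device.
# Stage names and engine labels are hypothetical, not any vendor's API.
STAGE_TO_ENGINE = {
    "motor_control_loop": "real-time MCU core",
    "ui_and_os":          "64-bit applications processor",
    "video_scaling":      "GPU",
    "packet_inspection":  "FPGA-fabric accelerator",
}

def place(stage: str) -> str:
    """Fall back to the general-purpose CPU when no better match exists."""
    return STAGE_TO_ENGINE.get(stage, "64-bit applications processor")

for stage in ("motor_control_loop", "video_scaling", "crypto_offload"):
    print(f"{stage} -> {place(stage)}")
```

The point is the dispatch pattern itself: each part of the workload lands on the engine best suited to it, with a general-purpose core as the fallback.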

With the exponential increase in non-recurring engineering (NRE) cost to design a new chip with each new process node, the minimum volume required to justify the creation of a unique device is getting higher. So, in many cases, it won’t make economic sense to produce an ASSP for a highly focused application. But, if that ASSP had enough configurability to allow it to solve a wider range of problems, the cost of producing it would be amortized over a higher volume. 
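
To see how that amortization works, here’s a back-of-the-envelope sketch – every dollar figure and volume below is invented for illustration, not a real industry number:

```python
# Back-of-the-envelope NRE amortization; all numbers here are invented.
def effective_unit_cost(nre: float, unit_cost: float, volume: int) -> float:
    """Per-unit cost once one-time NRE is spread across shipped volume."""
    return nre / volume + unit_cost

# A narrowly focused ASSP: big advanced-node NRE, modest addressable volume.
narrow = effective_unit_cost(nre=100e6, unit_cost=8.00, volume=2_000_000)

# The same die with configurability added: slightly higher unit cost,
# but the identical NRE now amortizes across five times the volume.
broad = effective_unit_cost(nre=100e6, unit_cost=9.00, volume=10_000_000)

print(f"narrow ASSP:  ${narrow:.2f}/unit")   # narrow ASSP:  $58.00/unit
print(f"configurable: ${broad:.2f}/unit")    # configurable: $19.00/unit
```

Where the crossover falls depends entirely on the assumed numbers, but the shape of the math doesn’t change: as NRE grows with each process node, the volume that configurability unlocks matters more and more.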

That means we’ll probably be seeing more ASSPs with programmable logic fabric and processors on them, or we’ll be seeing more application-specific versions of FPGAs. The continuum of configurability will let us find the sweet spot between performance and flexibility.
