
The End of ASSP

Or Time for New Terms?

FPGAs have their sights set on ASICs. This is nothing new. FPGA companies have actually been hell-bent on “ASIC replacement” for the past three decades. With every new generation of FPGA technology, it seems the claim is the same: “This time, we have a true ASIC replacement.” 

They’ve cried “wolf” so many times, we have all forgotten what a real wolf even looks like.

And, we have also collectively forgotten what an “ASIC” is. The term, of course, is an acronym for “Application Specific Integrated Circuit.” It hails from an era when there were basically two kinds of chips – “standard parts” and ASICs. Standard parts were, as the name implies, things designed for generic use – with no particular application in mind. Memory chips, logic gates (yep, they used to sell logic gates as stand-alone chips), interfaces – you could buy yourself a bunch of standard parts and stick ‘em on a board to build just about anything. 

If you wanted a greater degree of integration, cost savings, performance, and power efficiency, and you had yourself a big ol’ (five-figure, at the time) budget, you designed yourself an ASIC. That chip was (again, as the name implies) specifically designed for your particular application. There were a number of ways of designing an ASIC – you could do anything from “full custom” (pushing polygons around in a Calma machine to produce detailed layout) to “standard cell” (assembling pre-designed blocks into a custom configuration) to “gate array” (where you supplied only the interconnect for a pre-designed array of unconnected logic gates).

Life was simple – chips were standard parts or ASICs – that was it.

Then, FPGAs came along and muddied the water. FPGAs are most definitely standard parts. They are created with (almost) no particular application in mind, and they can be used in a variety of systems. They are sold to the systems company in a state of completion – built, tested, packaged, ready to solder on the board. However, FPGAs clearly have ASIC-like properties. We have to do the equivalent of an ASIC design in order to program our FPGA to do what we want. In our system, it behaves like an ASIC – implementing a specific set of functions that are completely tailored to our application, with a custom pinout that is also particular to our application. Our terminology was no longer clean – we had standard parts, ASICs, and now, these weird FPGAs.

At the same time, Moore’s Law was wreaking havoc on our comfy little 2-bucket (or 3-bucket) world. As chips got larger, we integrated more stuff into them. ASICs stopped being aggregators for glue logic in our system and began to actually BE the system. The term “SoC” (system on chip) was born. We editors and those in the analyst community who pass the days pontificating from the pundit pulpits in our ivory cubicles barely noticed that our terms were being stretched and warped beyond repair. Clearly, an SoC was an ASIC, right? It was often the main chip in a system (often along with an FPGA to gather up all the stuff we forgot to put on our ASIC), and it clearly had an application-specific function. From that perspective, it was clearly an ASIC. However, some SoC functions became so common that economy-of-scale made it sensible for a third party to design and distribute devices to a number of companies designing the same or similar things. In that context, we had an ASIC SoC that was sold and distributed like a standard part.

Thus was born the “ASSP,” or application-specific standard product.

The horrors! One morning, everything was fine. Then, we editors and analysts went out for one of our usual three-martini lunches, and when we came back… BAM, there were these FPGAs and ASSPs and SoCs flying around and we no longer had any idea what to call anything. So, we did what any sensible professional would do – we entered a phase of utter denial. We just kept calling things whatever name popped into our heads at the time. This thing Qualcomm makes that goes into a zillion cell phones had to be an “ASSP.” This thing that Apple makes must be an “ASIC.” Both of them were “SoCs.” This other chip that Freescale produces is also an “SoC,” but it feels more like a “standard part.” If you looked at ASSPs, standard parts, and ASICs – you often couldn’t make any meaningful distinction between them. They all had stuff like ARM processor cores, a bunch of peripherals stitched together on a bus, memory, interfaces, probably some analog (at least ADCs and DACs) – they were all the friggin’ same!

We wrung our hands, we drank more martinis (it didn’t help), we pondered the problem. Then, we went right back into denial. 

And the FPGA companies kept telling us they wanted to “replace ASICs.” Crap, we can’t even agree what an ASIC is anymore. How will we know if they succeed? We just kept on publishing nice 3D charts showing the number of ASIC design starts trending smoothly down and the NRE (non-recurring engineering expense associated with designing a chip) rocketing up into the bazillions of dollars, and we took the easy but completely invalid approach of trusting the trend lines. Yep, ASICs were going away, the FPGA market was growing, and if we followed those lines out five or fifteen years, all chips would definitely be FPGAs.

In the real world, however, FPGAs were making their own definitions. If you were designing a system where the amortized NRE from designing an ASIC put your real BOM cost for that ASIC at more than an FPGA, and if the FPGA could actually solve your technical problem, you didn’t design an ASIC. Large systems companies began to convert more and more of their ASIC design operations into FPGA design operations. “See?” said the FPGA companies. “We are replacing ASICs.”
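That buy-versus-build decision boils down to simple arithmetic: amortize the one-time NRE over your expected production volume and compare the result against the FPGA’s purchase price. A minimal sketch, with all dollar figures purely hypothetical for illustration:

```python
# Illustrative sketch (all numbers hypothetical): compare the effective
# per-unit cost of an ASIC (one-time NRE amortized over the production
# run) against simply buying an FPGA off the shelf.

def effective_unit_cost(nre: float, unit_cost: float, volume: int) -> float:
    """Spread the one-time NRE across every unit produced."""
    return unit_cost + nre / volume

asic_nre = 2_000_000   # hypothetical one-time ASIC design cost ($)
asic_unit = 8.0        # hypothetical per-die production cost ($)
fpga_unit = 45.0       # hypothetical FPGA purchase price ($)

for volume in (10_000, 50_000, 100_000):
    asic_cost = effective_unit_cost(asic_nre, asic_unit, volume)
    choice = "ASIC" if asic_cost < fpga_unit else "FPGA"
    print(f"{volume:>7} units: ASIC ${asic_cost:.2f}/unit "
          f"vs FPGA ${fpga_unit:.2f}/unit -> {choice}")
```

At low volumes the amortized NRE dominates and the FPGA wins; only at high volumes does the ASIC’s cheap die pay back the up-front investment. That crossover point is exactly where the big systems companies were deciding to fold their ASIC teams into FPGA teams.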

Of course, once ten or twelve companies had used FPGAs to do what an ASIC would have done, someone would notice an opportunity. Even though the economy-of-scale wasn’t there for one company to do an ASIC design exclusively for their own use, it did make economic sense for a third party to create an ASSP to sell to all those companies that were using FPGA solutions. The ASSP version was smaller, faster, cheaper, and used less power than the FPGA. It was already designed and tested. It was a slam-dunk.

This created a situation where FPGAs were used as the canaries in the mines for new standards and new technology deployments. If you were doing a “new thing,” you had to use FPGAs. Once that new thing became “normal,” somebody would design and market an ASSP, and you’d do a cost-reduction spin of your system taking advantage of the ASSP. Meanwhile, you were busy creating the next-generation version – again, with FPGAs. FPGAs lived in the market gap between the beginning of a “new thing” and the arrival of the ASSPs for that thing. FPGAs could hang on as long as their performance was adequate and their power and cost were low enough to discourage ASSP development. Skyrocketing NRE helped FPGAs as well – as the risk-reward margin for ASSP companies grew progressively uglier and the minimum break-even production volume trended steadily upward. FPGA companies were no longer replacing ASICs; they were delaying and eliminating the arrival of ASSPs.
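The “minimum break-even production volume” has a one-line formula: a would-be ASSP vendor must sell enough units for the per-unit savings over the incumbent FPGA solution to recover the NRE. A quick sketch (hypothetical prices) shows why rising NRE pushes that threshold relentlessly upward:

```python
# Illustrative sketch (hypothetical prices): the minimum volume an ASSP
# vendor must sell before per-unit savings over the FPGA recover the NRE.

def break_even_volume(nre: float, fpga_unit: float, assp_unit: float) -> float:
    """Units needed so that volume * (fpga_unit - assp_unit) == nre."""
    return nre / (fpga_unit - assp_unit)

# Assume each ASSP sale displaces a $50 FPGA with a $10 ASSP ($40 saved/unit).
for nre in (1_000_000, 5_000_000, 20_000_000):
    volume = break_even_volume(nre, fpga_unit=50.0, assp_unit=10.0)
    print(f"NRE ${nre/1e6:.0f}M -> break-even at {volume:,.0f} units")
```

As process nodes shrink and NRE climbs from millions toward tens of millions, the required volume scales linearly with it – and if the target market goes obsolete before that volume materializes, the ASSP never gets built.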

If one plots these trend lines, one reaches a point where it might not make sense for anyone to design an ASSP for anything that can be technically accomplished with an FPGA. People would do the initial development with FPGAs, and then, before there was sufficient economic benefit for anyone to justify ASSP development, that system would be obsolete and everybody would be off to the next thing with FPGAs. FPGAs went hunting for ASICs and may have bagged ASSPs instead.

Of course, now, the world is once again getting more complicated. As we’ve discussed many times on these pages, FPGAs are becoming SoCs as well, with multi-core processing subsystems, peripherals, memory, interfaces – oh, and some FPGA fabric as well. In reality, from one perspective, we can already choose between SoCs with FPGA fabric and SoCs with no FPGA fabric. This new FPGA SoC is well equipped to be the core of many systems, with a processing subsystem that allows functionality to be customized and differentiation created in software, and FPGA fabric that allows custom hardware, accelerators, and interfaces to be created on the same chip. This FPGA SoC (like all FPGAs) is itself a standard part.

Have we come full circle?

Actually, no. As the FPGA business has become more competitive, FPGA companies have started to make even FPGAs somewhat application specific. If you’re doing a big DSP-intensive project, you can get FPGAs with more DSP resources. If you have special SerDes I/O requirements, you can get FPGAs that cater to that. If you are building prototyping boards, you can get FPGAs that have little more than a giant array of LUTs. As application demands get stronger, FPGAs will be tailored more and more to specific applications.

So, the modern FPGA may be considered a standard part, an SoC, an ASIC, and an ASSP all at once. Confused yet? Yeah, so are we.

