feature article

DAC, Death, and the Future

Who Will Design Your SoC?

This year’s Design Automation Conference in San Diego was, as has become tradition, kicked off by Gary Smith’s presentation on the state of the EDA industry and EDA’s role in the design of the coming generations of electronic devices.  As usual, the presentation was a mix of data collected from numerous industry sources, interesting charts and graphs projecting market behaviors into the distant future, and educated analysis mixed with wild speculation on the complex dynamics of technology and money that is the semiconductor industry.

The next day, Cadence hosted a panel on the challenges of 20nm design, where there were discussions of the staggering issues that will be faced by design teams creating semiconductor devices at that node.  The combination of technical challenges for delivering on the next round or two of Moore’s law is daunting indeed, and design at those tiny geometries will be an activity exclusively reserved for the most knowledgeable and best funded design teams in the world.

According to the conventional wisdom floating around at this year’s DAC, designing an SoC on the current leading-edge processes requires a team of 100 to 200 engineers and a budget somewhere in the range of $25M to $50M.  That’s for a single chip design.  Some quick math lets us infer several things.  First, if you assume that a fully-burdened engineer costs a company an average of $200,000 per year (counting salary, benefits, office, equipment, espresso, pizza, etc.), and the average custom SoC design takes 18 months to two years to complete, almost all of that $25M-$50M design cost is accounted for by the size of the engineering team required.  While we hear a lot about mask costs and other non-recurring engineering (NRE) charges associated with the fabrication process itself, those costs are still dwarfed by the simple fact that it takes too many engineers to design a chip.
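The back-of-the-envelope arithmetic above can be sketched out directly; the team sizes, loaded cost, and schedules are the article's estimates, not measured data:

```python
# Back-of-the-envelope SoC engineering labor cost, using the article's estimates.
LOADED_COST_PER_ENGINEER = 200_000  # $/year, fully burdened (salary, benefits, pizza, etc.)

def engineering_cost(team_size, years):
    """Total engineering labor cost for one SoC design."""
    return team_size * LOADED_COST_PER_ENGINEER * years

low = engineering_cost(team_size=100, years=1.5)   # small team, 18-month schedule
high = engineering_cost(team_size=200, years=2.0)  # large team, two-year schedule

print(f"Engineering labor: ${low/1e6:.0f}M to ${high/1e6:.0f}M")
# Labor alone spans roughly $30M to $80M, which comfortably
# accounts for the quoted $25M-$50M design budgets.
```

At those figures, labor alone brackets the entire reported design budget, which is the point: mask-set NRE is a rounding error next to the payroll.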

Like any task that requires too much human labor, the solution to the SoC engineering problem is better tools and more automation.  That is the responsibility of the EDA industry – which is well represented here at DAC.  At the conference, we have also heard that the EDA industry currently spends somewhere in the realm of $1B developing tools to support each new process node, and that cost is increasing as the complexity of the required tools continues to rise.

That number is problematic on both ends. 

First, in a process with a huge deficit of automation, the world is investing only the equivalent of twenty to forty chip designs on the tools that address the single biggest problem – skyrocketing design costs.  EDA can’t afford to invest more, however; the industry simply doesn’t earn enough.  With competition in the tools business, the first thing to go is pricing discipline.  EDA companies lower prices to compete with one another, and the result is a significant reduction in the total money coming into EDA.

To follow through this logic circle a bit:  Almost nobody can afford to design custom SoCs today because it’s too expensive.  It’s too expensive because the EDA tools aren’t good enough.  The EDA tools aren’t good enough because the EDA industry doesn’t earn enough money to invest sufficiently in R&D to make better tools.  The EDA industry doesn’t earn enough money because there aren’t enough people designing custom SoCs.  Loop back to step one.

When the EDA industry was born, it was because economy of scale was available in creating design tools that worked across a broad set of semiconductor fabs.  Today, with the incredible consolidation that’s happened in the fab world, it’s not clear that economy of scale still exists.  We might well have better tools if each semiconductor fabrication company owned its own tools – and distributed them – just like in the old days.  Then, fabs could compete on tools as well as process.  If independent EDA companies could make tools that would work better, they could try to OEM them to the semi companies.  This model would allow the cost of the tools to be amortized over the silicon revenues, and each semi company could decide how much to invest in tools in order to keep their offering competitive. 

Before you go calling this idea crazy, it’s exactly what is working today in the FPGA business.  The majority of the world’s electronic designers DO get tools from the same place they buy chips – from their FPGA supplier.  Those designers pay almost nothing for their tools, yet the tool development efforts thrive.  In the FPGA business, tools are a huge part of the competitive picture.  If an FPGA company wants to beat their competitors in performance, they can optimize their hardware architecture, or they can tune their synthesis and place-and-route software to give better results.  Either approach gives them a competitive edge. 

With the current and coming generation of FPGAs, enormous capabilities are on tap.  Complex SoCs will be dropped on our desks for a relative pittance, and the whole embedded subsystem will already work.  If your SoC is something like a few ARM processors with memory and peripherals connected via standard protocols – along with some commercially-available IP blocks for standard functions, plus a little bit of magic your team adds itself – you can probably do it in an FPGA, and most of the work is done before you buy the chip.  Compare that with the ASIC/COT/SoC design flow, where a reported 70% of the engineering effort is spent on design verification, and the FPGA folks are almost done before they start.  Walk the aisles at DAC and look at the tough problems being tackled by the tool suppliers.  For 90% of those problems, one could say, “Not applicable to FPGA design.”

As a result, with the FPGA option, instead of the 100-200 engineers needed to design a complex SoC, it takes one to five.  Amortize those cost differences over the expected volume run of your product and see how much each of those custom SoCs really costs you.  Oh, and don’t forget that, a couple of years from now, when you have a better idea for your product or it has to support some new standard to stay current, your ASIC/SoC-based product will need a do-over.  Your FPGA-based system may be fine with a simple update in the field.
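That amortization argument is easy to work through with a rough sketch.  All of the numbers below (unit volume, silicon prices, NRE figures) are hypothetical illustrations, not figures from the article:

```python
# Hypothetical amortization of up-front design cost (NRE) over a product run.
def cost_per_unit(nre, unit_price, volume):
    """Effective per-unit cost once design NRE is spread over the volume run."""
    return unit_price + nre / volume

volume = 100_000  # assumed lifetime unit volume for the product

# Custom SoC: large NRE (~$35M of engineering), cheap silicon per unit.
asic = cost_per_unit(nre=35_000_000, unit_price=20, volume=volume)

# FPGA: a few engineers' worth of NRE, but pricier silicon per unit.
fpga = cost_per_unit(nre=1_000_000, unit_price=150, volume=volume)

print(f"ASIC: ${asic:.0f}/unit, FPGA: ${fpga:.0f}/unit")
# At this volume the SoC's NRE dominates its per-unit cost; the crossover
# favors the custom SoC only at much higher volumes.
```

With these assumed numbers, the custom SoC comes out at $370 per unit against $160 for the FPGA, even though the FPGA silicon itself costs several times more.  Only at very high volumes does the NRE fade into the noise.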

The combination of the EDA problem and the FPGA solution paints a pretty compelling picture of the potential future of custom SoC design.  If your business card doesn’t say something like “Apple” or the name of one of about ten other mega-companies, you probably won’t be doing any, and you may also not be doing any business with the main companies represented here at DAC.  That’s a problem EDA needs to solve even more urgently than the 20nm process node, double-patterning, or through-silicon vias.  It’s a problem of the survival of their industry.  I hope for their sake that they can sort it out.

