The New Silicon Productivity Gap

One Clue: It’s Not Silicon

Scaling is a wonderful thing. As we’ve been able to put more and more transistors in less and less space, all we have to do is plot the magnificence of the single-chip mega-widgetry we’ll be able to create in the years to come, and the prospects get our salivary glands going.

So, flush with the promise of the upcoming grandeur of things to be, we march on with visions of digital sugarplums dancing in our heads. Until one of those annoying guys in the meeting – you know, the one who’s always trying to toss some reality into the discussion? The one who always brings the party down by pointing out inconvenient things we’d rather not think about? That guy who will never be promoted because he lacks the innate ability to inspire people to agree with infeasible goals and then blame the team when the goals aren’t met? Yeah, him. So he asks, “Um… not to be a downer or anything, but… we currently design 100,000 gates in 18 months. We’ll be able to put a thousand times more gates than that on silicon, but how are we going to design it without it taking a thousand times as long?”

Typically, at this point, there would be an awkward silence in the room, maybe a little shuffling of feet or throat-clearing. Then someone, eager for that promotion, would say something like, “We’ll just have to work smarter, won’t we?” and, beaming proudly, would look around the room for validation by his peers and, most importantly, his boss. And someone else would chortle and say, “Except for the extra time we’ll need to make more trips to the bank!” And everyone would laugh and slap each other’s backs, and the tension would be relieved, and the topic would have been deflected, and everyone could go on as before without having to deal with Mr. Sourpuss in the corner.

Inexorably, however, the fact that designing gates isn’t the same as making room for them on silicon became inescapable, and the alarm went up about a looming productivity gap: silicon capacity was growing far faster than the ability to fill that capacity. EDA rushed in and raised the levels of abstraction and the headlines went away. (A couple people might have privately mentioned to That Guy that he had been right, but such acknowledgments would not be for public consumption. Definite career-killer.)

Well, Cadence’s Michał Siwiński has been looking at accumulating data, and his take is that there is a new productivity gap. (Well, perhaps newly acknowledged? That Guy probably brought this up in some meeting in a big chip company, like, five years ago, but… well, it was just such a buzzkill…) And this isn’t simply a repetition of the one we’ve already seen: by his reckoning, this gap is six – 6 – times larger than the silicon productivity gap of the 90s.

And this gap has implications for a tools market that isn’t used to seeing anyone spend money: the market for software tools.

You see, there’s too much hardware on a chip now for it to remain a hardware play. Any self-respecting SoC comes with software. And that software has to work. In fact, “just working” isn’t good enough: it has to work efficiently with the underlying hardware. If it doesn’t, then a hardware tweak might be in order. Which means… the software has to be vetted before tape-out (at least to some extent).

There has been a huge gap between what companies are willing to pay for software and hardware tools. Actually, that’s not quite true: the gap is between silicon design tools and any other tools (since FPGA tools are the poster-child for “free” hardware design tools). And, just as it is with FPGA tools, wise industry veterans will counsel you that “no one pays for software tools.”

The only thing that keeps silicon design tools out of the modern trend of, “Wait, what? I have to pay for stuff??” is fear: that gnawing 2 AM feeling that it might be necessary to respin the masks. For millions of dollars.

Well, software has joined the ranks of Things To Lose Sleep Over. If the SoC software isn’t working well, then it might be an indication of a hardware problem that might force a mask spin.

As was the case for alleviating the silicon design productivity gap, abstraction is part of the solution here as well. It happens in three steps, according to the maturity of the hardware. Early, when there is no hardware at all except in the imagination of the architects, virtual prototypes provide a way to get a high-level understanding of how an underlying platform will handle the kind of software and data sets that are likely to challenge the workings of the system. Here the hardware is completely abstracted as a set of transaction-level models.
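
To make this concrete, here is a minimal sketch (in plain C, with illustrative names like tlm_txn and bus_transport; no particular modeling library is implied) of what “abstracting the hardware as transaction-level models” means in practice: software issues whole read/write transactions as function calls, and the model returns data plus a rough latency estimate instead of wiggling bus signals cycle by cycle.

/* A minimal transaction-level model sketch. The "hardware" is just a
 * function that services entire read/write transactions and reports an
 * approximate latency, rather than toggling bus signals every cycle.
 * All names here (tlm_txn, bus_transport, ...) are illustrative. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef enum { TLM_READ, TLM_WRITE } tlm_cmd;

typedef struct {
    tlm_cmd  cmd;
    uint32_t addr;
    uint32_t data;       /* in for writes, out for reads */
    unsigned latency_ns; /* filled in by the target as a coarse estimate */
} tlm_txn;

static uint8_t sram[4096]; /* behavioral memory model, no timing detail */

/* One blocking transport call stands in for an entire bus transaction. */
static void bus_transport(tlm_txn *t)
{
    uint32_t off = t->addr & 0xFFC; /* word-aligned, wrapped into the model */
    if (t->cmd == TLM_WRITE)
        memcpy(&sram[off], &t->data, 4);
    else
        memcpy(&t->data, &sram[off], 4);
    t->latency_ns = 10; /* good enough for architecture-level tradeoffs */
}

int main(void)
{
    tlm_txn w = { TLM_WRITE, 0x100, 0xCAFEF00D, 0 };
    bus_transport(&w);

    tlm_txn r = { TLM_READ, 0x100, 0, 0 };
    bus_transport(&r);
    printf("read 0x%08X in ~%u ns\n", (unsigned)r.data, r.latency_ns);
    return 0;
}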

The goal at this point is to ensure that the original design of the system accommodates the real software needs of the system. It’s expected that the hardware will be dinked with to tune it up and make the necessary tradeoffs between performance, power, and area. In fact, according to Michał (based on IBS data), 7% of hardware development is architecture, while closer to 20% of software development consists of architecture work. The virtual prototype is where they come together.

From architecture we move to implementation, which typically means engaging the Silos of Silence as hardware and software teams go about their independent business. But they must come together again before tape-out to make sure that the promise of the architects has been realized in software and hardware that work well together.

But simulation is simply too slow to run real software to any extent (especially if an OS has to boot first). Here we engage step two: emulation. Emulation used to be more about running a system in the face of real-world peripherals, traffic, and data. But this use model is being rapidly overtaken by the new emulator raison d’être: executing software on an implementation of the hardware. Not the final silicon implementation, but the RTL implementation of the design, realized in one of the big emulators from Cadence, Eve, and Mentor.

This is where the real “silicon checkout” occurs. Realistic suites of software are run on the hardware to exercise a range of scenarios and to find any unanticipated glitches. Emulators aren’t inexpensive desktop toys; companies plunk down some serious coin when acquiring them. Of course, they were originally making that investment for hardware checkout – now they’re doing it for software checkout as well. Look Mom, someone paid money for a software tool!

There’s even enough belief in the willingness of companies to invest in software tools – in this context – to stimulate the creation of an entirely new tool that automatically creates C tests to stress the architecture in ways that normal software probably wouldn’t. This is the brainchild of startup Breker.
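
For flavor only (this is an illustration, not Breker’s actual output), an auto-generated stress test might look something like the C below: where hand-written software tends to walk memory politely, a generated test can stride across cache lines in irregular patterns and mix writes with read-backs to provoke corner cases nobody thought to write by hand.

/* Illustrative auto-generated stress test (not Breker's output).
 * Odd, prime-ish strides defeat simple prefetchers and hit cache sets
 * in an irregular order; the checksum is compared against a golden run. */
#include <stdint.h>
#include <stdio.h>

#define BUF_WORDS (1 << 16)
static volatile uint32_t buf[BUF_WORDS];

int main(void)
{
    uint32_t checksum = 0;
    const unsigned strides[] = { 1, 7, 61, 509, 4099 }; /* odd: full coverage */
    for (unsigned s = 0; s < sizeof strides / sizeof strides[0]; s++) {
        for (unsigned i = 0, idx = 0; i < BUF_WORDS; i++) {
            buf[idx] = idx * 2654435761u;  /* write... */
            checksum ^= buf[idx];          /* ...then read back immediately */
            idx = (idx + strides[s]) % BUF_WORDS;
        }
    }
    printf("checksum: 0x%08X\n", (unsigned)checksum);
    return 0;
}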

At this stage of development, not all the software is ready. And, realistically, no one expects to check out all the software on an emulator. The really important parts are those that touch the hardware. Just getting to the command prompt at the end of an OS boot process is half the battle won. Drivers and anything else low-level are center-stage during this phase.
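
A sketch of what “software that touches the hardware” means here: a bare-bones driver banging on memory-mapped registers, shown below in C. The register map (UART_BASE, the offsets, the ready bit) is entirely hypothetical; the point is that running this code on the emulator will catch a wrong offset, a wrong bit polarity, or a missing clock enable before the masks are cut.

/* A minimal driver sketch of the kind checked out on an emulator.
 * The register map below is hypothetical, not any real SoC's. */
#include <stdint.h>

#define UART_BASE   0x40001000u  /* assumed address from the SoC memory map */
#define UART_DATA   (*(volatile uint32_t *)(UART_BASE + 0x0))
#define UART_STATUS (*(volatile uint32_t *)(UART_BASE + 0x4))
#define TX_READY    (1u << 0)

/* Busy-wait until the transmitter can take another byte, then write it.
 * On an emulator, a hang here is exactly the hardware/software mismatch
 * being hunted. */
static void uart_putc(char c)
{
    while (!(UART_STATUS & TX_READY))
        ;
    UART_DATA = (uint32_t)(uint8_t)c;
}

void uart_puts(const char *s)
{
    while (*s)
        uart_putc(*s++);
}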

Once the hardware has been deemed stable, the design – until now loosely implemented on the emulator – can be more tightly implemented on a prototype board. This third step has two benefits: one is that prototype boards can run faster than an emulator (while taking longer to implement); the second is that they are cheaper (“cheap” being a relative term, with a 12”x12” board costing more than some cars) and can be given to more software developers to use as a target platform as they continue writing code while the chip works its way through the fab. Although, with the increasing prevalence of virtual platforms, actual hardware is becoming less and less necessary.

So tools are coming to the productivity rescue for software, taking their place alongside the hardware tools. And people appear willing to spend money on them. And there is room for more. Especially with multicore systems, increasingly the norm, debugging and analysis are behind the curve. And managing real-time performance in safety-critical systems can be even harder when concurrency reduces system determinism.
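
A tiny C example shows why concurrency erodes determinism (a sketch, assuming POSIX threads): two threads increment a shared counter with no synchronization, and the final value changes from run to run because the load-modify-store sequences interleave differently each time.

/* Build with: cc -pthread race.c
 * Two threads race on a shared counter; the final value varies per run.
 * Multicore SoC bugs of this family are what debug and analysis tools
 * are still catching up with. */
#include <pthread.h>
#include <stdio.h>

static long counter; /* shared, deliberately unprotected: a data race */

static void *bump(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++; /* load-modify-store; can interleave across cores */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* Rarely 2000000 on a multicore machine; the loss differs per run. */
    printf("counter = %ld\n", counter);
    return 0;
}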

Tools for figuring out what’s going on inside bewilderingly complex systems will contribute more than their fair share to closing the productivity gap until it finally gets wrestled into submission. And, at that junction between hardware and software, the tool sets abut as well. Hardware debugger meets software debugger, and they look different, and they’re run by different folks; there’s plenty of room to smooth out the process of dispositioning a problem as related to hardware or software (or both).

There’s one other element that will boost software productivity. And it is one of the key reasons why silicon design is now more productive: IP. Whether internal or external, software developers are turning elsewhere for anything they can get away with not inventing. So much so that one company, Protecode, makes its business helping large companies to keep track of the various licenses and obligations attached to the bits and pieces of software acquired from all over the place.

So between tools and IP, you can bet your bottom dollar that more and more companies will pay top dollar to ensure that the software they’re writing plays well with the hardware they’re building. It’s time to start closing the software productivity gap.      
