
The New Silicon Productivity Gap

One Clue: It’s Not Silicon

Scaling is a wonderful thing. As we’ve been able to put more and more transistors in less and less space, all we have to do is plot the magnificence of the single-chip mega-widgetry we’ll be able to create in the years to come, and the prospects get our salivary glands going.

So, flush with the promise of the upcoming grandeur of things to be, we march on with visions of digital sugarplums dancing in our heads. Until one of those annoying guys in the meeting – you know, the one who’s always trying to toss some reality into the discussion? The one who always brings the party down by pointing out inconvenient things we’d rather not think about? That guy who will never be promoted because he lacks the innate ability to inspire people to agree with infeasible goals and then blame the team when the goals aren’t met? Yeah, him. So he asks, “Um… not to be a downer or anything, but… we currently design 100,000 gates in 18 months. We’ll be able to put a thousand times more gates than that on silicon, but how are we going to design it without it taking a thousand times as long?”

Typically, at this point, there would be an awkward silence in the room, maybe a little shuffling of feet or throat-clearing. Then someone, eager for that promotion, would say something like, “We’ll just have to work smarter, won’t we?” and, beaming proudly, would look around the room for validation by his peers and, most importantly, his boss. And someone else would chortle and say, “Except for the extra time we’ll need to make more trips to the bank!” And everyone would laugh and slap each other’s backs, and the tension would be relieved, and the topic would have been deflected, and everyone could go on as before without having to deal with Mr. Sourpuss in the corner.

Inexorably, however, the fact that designing gates isn’t the same as making room for them on silicon became inescapable, and the alarm went up about a looming productivity gap: silicon capacity was growing far faster than the ability to fill that capacity. EDA rushed in, raised the levels of abstraction, and the headlines went away. (A couple of people might have privately mentioned to That Guy that he had been right, but such acknowledgments would not be for public consumption. Definite career-killer.)

Well, Cadence’s Michał Siwiński has been looking at accumulating data, and his take is that there is a new productivity gap. (Well, perhaps newly acknowledged? That Guy probably brought this up in some meeting in a big chip company, like, five years ago, but… well, it was just such a buzzkill…) And this isn’t simply a repetition of the one we’ve already seen: by his reckoning, this gap is six – 6 – times larger than the silicon productivity gap of the 90s.

And this gap has implications for a tools market whose customers aren’t used to spending money: the market for software tools.

You see, there’s too much hardware on a chip now for it to remain a hardware play. Any self-respecting SoC comes with software. And that software has to work. In fact, “just working” isn’t good enough: it has to work efficiently with the underlying hardware. If it doesn’t, then a hardware tweak might be in order. Which means… the software has to be vetted before tape-out (at least to some extent).

There has been a huge gap between what companies are willing to pay for software and hardware tools. Actually, that’s not quite true: the gap is between silicon design tools and any other tools (since FPGA tools are the poster-child for “free” hardware design tools). And, just as it is with FPGA tools, wise industry veterans will counsel you that “no one pays for software tools.”

The only thing that keeps silicon design tools out of the modern trend of, “Wait, what? I have to pay for stuff??” is fear: that gnawing 2 AM feeling that it might be necessary to respin the masks. For millions of dollars.

Well, software has joined the ranks of Things To Lose Sleep Over. If the SoC software isn’t working well, then it might be an indication of a hardware problem that might force a mask spin.

As was the case for alleviating the silicon design productivity gap, abstraction is part of the solution here as well. It happens in three steps, according to the maturity of the hardware. Early, when there is no hardware at all except in the imagination of the architects, virtual prototypes provide a way to get a high-level understanding of how an underlying platform will handle the kind of software and data sets that are likely to challenge the workings of the system. Here the hardware is completely abstracted as a set of transaction-level models.
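To make that abstraction concrete: real virtual prototypes are typically built in SystemC/TLM, but the essence of a transaction-level model can be sketched in a few lines of plain C. Everything here (the transport call, the register file, the cycle estimate) is invented for illustration; the point is that the “hardware” is reduced to a function that moves data and returns an approximate latency – no pins, no clocks, no RTL.

```c
/*
 * Minimal sketch of the transaction-level idea, in plain C for
 * illustration (real virtual prototypes typically use SystemC/TLM).
 * The "hardware" is abstracted to a single transport call that moves
 * data and returns an estimated latency.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef enum { TLM_READ, TLM_WRITE } tlm_cmd_t;

typedef struct {
    tlm_cmd_t cmd;   /* read or write */
    uint32_t  addr;  /* target address */
    uint8_t  *data;  /* payload buffer */
    size_t    len;   /* payload length in bytes */
} tlm_txn_t;

/* Hypothetical memory-mapped peripheral: 4 KB of backing storage
 * plus a crude per-transaction latency estimate. */
static uint8_t regs[4096];

static unsigned transport(tlm_txn_t *t)
{
    if (t->cmd == TLM_WRITE)
        memcpy(&regs[t->addr & 0xFFF], t->data, t->len);
    else
        memcpy(t->data, &regs[t->addr & 0xFFF], t->len);
    return 10 + 2 * (unsigned)t->len;  /* estimated cycles, not timing-accurate */
}

int main(void)
{
    uint8_t buf[4] = { 0xDE, 0xAD, 0xBE, 0xEF };
    tlm_txn_t wr = { TLM_WRITE, 0x100, buf, sizeof buf };
    unsigned cycles = transport(&wr);

    uint8_t rd_buf[4] = { 0 };
    tlm_txn_t rd = { TLM_READ, 0x100, rd_buf, sizeof rd_buf };
    cycles += transport(&rd);

    printf("read back 0x%02X%02X%02X%02X in ~%u cycles\n",
           rd_buf[0], rd_buf[1], rd_buf[2], rd_buf[3], cycles);
    return 0;
}
```

Software can run against a model like this long before any RTL exists, and even crude cycle estimates are enough to expose architectural bottlenecks while they’re still cheap to fix.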

The goal at this point is to ensure that the original design of the system accommodates the real software needs of the system. It’s expected that the hardware will be dinked with to tune it up and make the necessary tradeoffs between performance, power, and area. In fact, according to Michał (based on IBS data), 7% of hardware development is architecture, while closer to 20% of software development consists of architecture work. The virtual prototype is where they come together.

From architecture we move to implementation, which typically means engaging the Silos of Silence as hardware and software teams go about their independent business. But they must come together again before tape-out to make sure that the promise of the architects has been realized in software and hardware that work well together.

But simulation is simply too slow to run real software to any extent (especially if an OS has to boot first). Here we engage step two: emulation. Emulation used to be more about running a system in the face of real-world peripherals, traffic, and data. But this use model is being rapidly overtaken by the new emulator raison d’être: executing software on an implementation of the hardware. Not the final silicon implementation, but the RTL implementation of the design, realized in one of the big emulators from Cadence, Eve, and Mentor.

This is where the real “silicon checkout” occurs. Realistic suites of software are run on the hardware to exercise a range of scenarios and to find any unanticipated glitches. Emulators aren’t inexpensive desktop toys; companies plunk down some serious coin when acquiring them. Of course, they were originally making that investment for hardware checkout – now they’re doing it for software checkout as well. Look Mom, someone paid money for a software tool!

There’s even enough belief in the willingness of companies to invest in software tools – in this context – to stimulate the creation of an entirely new tool that automatically generates C tests to stress the architecture in ways that normal software probably wouldn’t. This is the brainchild of startup Breker.
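Breker’s actual tests are generated from scenario models, so the following hand-written sketch is only meant to convey the flavor: pseudo-random, self-checking C code that hammers the design in patterns ordinary application software would never produce. The scratch region, the LFSR, and the iteration count are all made up for illustration.

```c
/*
 * Hand-rolled illustration of the *kind* of C stress test a generator
 * might emit: pseudo-random, self-checking accesses intended to provoke
 * corner cases that well-behaved application code would never hit.
 * (Not Breker output; all the specifics are invented.)
 */
#include <stdint.h>
#include <stdio.h>

#define SCRATCH_WORDS 256
static volatile uint32_t scratch[SCRATCH_WORDS];  /* stands in for a device region */

static uint32_t lfsr = 0xACE1u;  /* tiny 16-bit LFSR as a pseudo-random source */
static uint32_t next_rand(void)
{
    lfsr = (lfsr >> 1) ^ (-(lfsr & 1u) & 0xB400u);
    return lfsr;
}

int main(void)
{
    uint32_t errors = 0;

    for (int i = 0; i < 10000; i++) {
        uint32_t idx = next_rand() % SCRATCH_WORDS;
        uint32_t val = next_rand();

        scratch[idx] = val;           /* write... */
        if (scratch[idx] != val)      /* ...and immediately verify */
            errors++;
    }

    printf("stress test done, %u mismatches\n", errors);
    return errors != 0;
}
```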

At this stage of development, not all the software is ready. And, realistically, no one expects to check out all the software on an emulator. The really important parts are those that touch the hardware. Just getting to the command prompt at the end of an OS boot process is half the battle won. Drivers and anything else low-level are center-stage during this phase.
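The code that earns emulator time at this stage looks less like application logic and more like the following bare-metal sketch: a driver-level sanity check that pokes memory-mapped registers and confirms the hardware responds as specified. The base address, register offsets, and ID value here are hypothetical; a real check would come straight from the chip’s register map. (Being bare-metal, it compiles freestanding for the emulated target rather than running on a host.)

```c
/*
 * Sketch of the low-level, hardware-touching code that gets priority
 * on an emulator: a driver-style sanity check of a memory-mapped block.
 * Base address, offsets, and ID value are invented for illustration.
 */
#include <stdint.h>

#define UART_BASE   0x40001000u  /* hypothetical peripheral base address */
#define REG(off)    (*(volatile uint32_t *)(UART_BASE + (off)))
#define REG_ID      0x00u        /* read-only ID register */
#define REG_CTRL    0x04u        /* control register: bit 0 = enable */
#define EXPECTED_ID 0x12340001u

int uart_sanity_check(void)
{
    if (REG(REG_ID) != EXPECTED_ID)   /* is the block even wired up? */
        return -1;

    REG(REG_CTRL) = 0x1u;             /* enable the block... */
    if ((REG(REG_CTRL) & 0x1u) == 0)  /* ...and confirm the write stuck */
        return -2;

    return 0;                         /* hardware and driver agree */
}
```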

Once the hardware has been deemed stable, the design – once loosely implemented on the emulator – can be more tightly implemented on a prototype board. This third step has two benefits: one is that prototype boards can run faster than an emulator (while taking longer to implement); the second is that they are cheaper (“cheap” being a relative term, with a 12”x12” board costing more than some cars) and can be given to more software developers to use as a target platform as they continue writing code while the chip works its way through the fab. Although, with the increasing prevalence of virtual platforms, actual hardware is becoming less and less necessary.

So tools are coming to the productivity rescue for software, taking their place alongside the hardware tools. And people appear willing to spend money on them. And there is room for more. Especially in multicore systems, which are increasingly the norm, debugging and analysis are behind the curve. And managing real-time performance in safety-critical systems can be even harder when concurrency reduces system determinism.

Tools for figuring out what’s going on inside bewilderingly complex systems will contribute more than their fair share to closing the productivity gap until it finally gets wrestled down into submission. And, at that junction between hardware and software, the tool sets abut as well. Hardware debugger meets software debugger, and they look different, and they’re run by different folks; there’s plenty of room to smooth out the process of dispositioning a problem as related to hardware or software (or both).

There’s one other element that will boost software productivity. And it is one of the key reasons why silicon design is now more productive: IP. Whether internal or external, software developers are turning elsewhere for anything they can get away with not inventing. So much so that one company, Protecode, has made a business of helping large companies keep track of the various licenses and obligations attached to the bits and pieces of software acquired from all over the place.

So between tools and IP, you can bet your bottom dollar that more and more companies will pay top dollar to ensure that the software they’re writing plays well with the hardware they’re building. It’s time to start closing the software productivity gap.      
