
Power’s Progress

Cadence Looks at Migration; Baum Looks at Emulation

Speed must be feeling pretty unloved these days. I mean, let’s face it: there was a time when the mettle of any decent chip was gauged by how fast it could go. Having a performance issue? Aw, just juice it up a little – there’s plenty more current where that came from!

Then it became harder to sacrifice power for anything – it was no longer free. It wasn’t expensive, but it had to be considered, much to timing’s annoyance. Now timing couldn’t get whatever it wanted, whenever it wanted it, just because it wanted it. It had to prove that the extra power was really worth it. Or perhaps some redesign was necessary. Or re-synthesis with new constraints.

Timing was still first fiddle, but there was a second-fiddle upstart that folks were starting to listen to. Timing suspected that it was but a matter of time before power challenged timing for first position.

And then cooling got to be an issue. And batteries happened more than ever before. And data warehouses started needing their own personal nuclear power plants (figuratively speaking – or is it?) in order to remain viable. Now power wasn’t just threatening first position; it was woodshedding like a demon, honing its skills so that, come challenge time, there would be no question.

The Great Migration

And now? Geez, yeah, performance is still a goal, but power is now practically a solo performer, with everyone else relegated to the role of backup, doing the electronic equivalent of the doo-wops (not to be confused with the no-ops). Exhibit A? The fact that, according to Cadence in a conversation at DAC, power sign-off at the 7-nm node is now taking longer than timing sign-off. Which also suggests that a burst of energy is being put into power-analysis tools.

Granted, few people are designing at 7 nm (and, as we’ll discuss in the future, at least one foundry is placing 7 nm on hold), but it’s not like it’s never going to happen. And, if the lurch to the next node is, in fact, taking a breather, perhaps it’s a chance for tools to catch up.

Now, straight extrapolation would suggest that the reason it’s so hard to achieve power sign-off is simply that power analysis takes too long or that fixes are tedious to do. Underlying that, the obvious problem might seem to be that power is just too high. But, again according to Cadence, that’s not the issue. I mean, OK, maybe it’s an issue, but it’s not the main one. The main one is: electromigration.

It’s turning out that, at 7 nm, even signal wires are subject to electromigration. As a result, the electromigration rules are incredibly complex. As a further result, analysis is backing off from the full-chip level and focusing more on each block. Analysis can then run more quickly (Cadence was speaking for its Voltus (digital) and Voltus-Fi (analog) power-analysis tools), with fixes being applied automatically during place-and-route (leaving a few manual fixes at the chip level).
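
For background – this is the textbook view, not a statement of Cadence’s actual rule deck – the classic way to estimate electromigration lifetime is Black’s equation:

    MTTF = A × J^(–n) × exp(Ea / (k·T))

where J is the current density in the wire, n is an empirically fitted exponent (often near 2), Ea is the metal’s activation energy, k is Boltzmann’s constant, and T is temperature. The intuition for 7 nm follows directly: shrink a wire’s cross-section without shrinking the current it carries, and J climbs, knocking lifetime down roughly with the square of the increase. That’s why even signal wires, not just power rails, now need checking.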

The Great Emulation

One way of accelerating simulation is through emulation. According to Baum in a separate DAC conversation, customers are keenly interested in running power analysis alongside not just simulation, but also emulation. And emulation is becoming more popular because it allows designers to see what the power profile of their chip will be when running specific software. Simulators can’t run software (OK, technically speaking, they can – if you have a few years to await the results); emulators can (if not application software, then, at the very least, low-level driver software).

Running power analysis with simulation is easy (as such things go) because there’s a simulation API that allows the simulator and the power analyzer to tag-team the problem in real time. As the simulator runs, it can feed info to the power analysis tool. Alas, says Baum, there is no corresponding API in an emulator, and so they must work on a file-exchange basis.
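
To make the tag-team idea concrete, here’s a minimal Python sketch of the simulation-side handshake. The class and method names are my own invention – real simulators expose value changes through standard interfaces like VPI or DPI, not this toy one – but the point is simply that each value change can be charged to a power model as it happens (with capacitance in pF and volts, ½·C·V² lands conveniently in pJ):

    class PowerAnalyzer:
        """Toy power model: accumulates dynamic energy as toggles arrive."""
        def __init__(self, cap_pf, vdd):
            self.cap_pf = cap_pf          # per-signal effective capacitance, pF
            self.vdd = vdd                # supply voltage, V
            self.energy_pj = 0.0          # accumulated dynamic energy, pJ

        def on_toggle(self, signal):
            # Energy per transition: E = 1/2 * C * Vdd^2
            self.energy_pj += 0.5 * self.cap_pf.get(signal, 0.0) * self.vdd ** 2

    analyzer = PowerAnalyzer(cap_pf={"clk": 0.02, "bus0": 0.01}, vdd=0.8)

    # In a real flow, the simulator would invoke this through its value-change
    # callback API; here we just replay a few fake events.
    for sig in ["clk", "bus0", "clk"]:
        analyzer.on_toggle(sig)

    print(f"dynamic energy so far: {analyzer.energy_pj:.4f} pJ")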

So, in this first phase of emulation-assisted power analysis, you run emulation first to generate a file with the signal transition information and then feed that file to the power-analysis engine. The second phase involves what Baum characterizes as interest on the part of emulator makers in providing the necessary API that would enable concurrent emulation and power analysis, much the same way it’s doable in simulation today.
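
As a sketch of that first, file-based phase, here’s a toy Python pass over a (highly simplified) VCD-style dump: count the toggles per net, then fold them into the textbook dynamic-power relation P ≈ α·C·Vdd²·f. The capacitance, supply, and time-window values are made-up placeholders, and real transition files (VCD, SAIF, FSDB) carry far more structure than this:

    from collections import defaultdict

    def toggle_counts(vcd_lines):
        """Count 0<->1 transitions per net identifier (scalar VCD body only)."""
        last = {}
        toggles = defaultdict(int)
        for line in vcd_lines:
            line = line.strip()
            if len(line) >= 2 and line[0] in "01":
                val, ident = line[0], line[1:]
                if ident in last and last[ident] != val:
                    toggles[ident] += 1
                last[ident] = val
        return toggles

    # Fake trace: net '!' toggles twice, net '"' once.
    trace = ["#0", "0!", '0"', "#5", "1!", '1"', "#10", "0!"]
    counts = toggle_counts(trace)

    CAP_F = 2e-15       # assumed effective capacitance per net: 2 fF
    VDD = 0.8           # assumed supply, volts
    WINDOW_S = 10e-9    # emulated time covered by the trace: 10 ns

    energy_j = sum(n * 0.5 * CAP_F * VDD ** 2 for n in counts.values())
    print(f"average dynamic power: {energy_j / WINDOW_S * 1e6:.3f} uW")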

There’s a potential bottleneck awaiting that approach, however, that probably needs to be addressed alongside the addition of the API: bandwidth for data exchange. The emulator sends over data on every cycle – whether or not that particular cycle has any relevance for power. Got a signal that stays low for 10,000 cycles? You’ll get 10,000 cycles of data confirming that, yup, it’s still low. And you have to transport that data and put it somewhere before you can figure out that, yeah, I didn’t really need that after all.

So having the emulator send data over only when specified signals change – and perhaps only for those signals that did change – would reduce the amount of data exchanged to only that which matters for power. This would likely remove the bottleneck lurking in the shadows at the moment.
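
The arithmetic behind that is easy to demonstrate. Here’s a throwaway Python comparison of the two transport schemes for exactly the 10,000-cycles-of-low case described above (the encoding is mine, not any emulator vendor’s):

    def per_cycle_dump(values):
        # What the emulator effectively ships today: one record per cycle.
        return list(values)

    def change_only_dump(values):
        # Event-driven alternative: a (cycle, value) record only on a change.
        events, last = [], None
        for cycle, v in enumerate(values):
            if v != last:
                events.append((cycle, v))
            last = v
        return events

    # A signal that sits low for ~10,000 cycles, blips, and goes low again.
    signal = [0] * 10_000 + [1] * 3 + [0] * 10_000

    print(len(per_cycle_dump(signal)))     # 20003 records
    print(len(change_only_dump(signal)))   # 3 records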

While having an emulator feed the power analysis tool – hosted not in the emulator but in an accompanying… um… host computer – would help speed things up, there’s yet another possible speed-up that would lead to a third phase of power emulation. And that opportunity comes from literally synthesizing the power model and putting it onto the emulator. Now there’s no data exchange to the host at all; power is calculated at emulation speed, all within the emulator.

Yeah, I know what you might be thinking: power models are analog, right? Cuz current is analog? How the heck do you synthesize something like that into an emulator? I went back to last year’s coverage and noted that Baum is all about doing power analysis at the RTL level, which pretty much rules out analog. Even so, just because a model is written in an RTL-level language doesn’t mean that it’s synthesizable. But I did confirm with Baum that their models are, in fact, synthesizable.
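
To make that a little more concrete: the heart of such a model could be as simple as an XOR against the previous cycle’s sample, a popcount, and a weighted accumulator – all trivially synthesizable. Here’s that idea sketched in Python rather than RTL; to be clear, this is my speculation about the general shape of such a model, not a description of Baum’s:

    def power_accumulator(samples, weight):
        """samples: per-cycle bit-vectors (ints); weight: energy per bit toggle."""
        prev, energy = 0, 0.0
        for word in samples:
            toggled = word ^ prev                  # bits that flipped this cycle
            energy += bin(toggled).count("1") * weight
            prev = word
        return energy

    # A 4-bit bus counting 0..7; each increment flips the trailing bits.
    print(power_accumulator(range(8), weight=1.0))  # 11.0 toggle-weighted units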

But, to be clear, this vision of a synthesized model isn’t a specific product plan. It’s simply an idea of what could be farther down the pipeline as power analysis continues its rapid evolution.

 

More info:

Baum power analysis

Cadence Voltus

Cadence Voltus-Fi
