
Using Power and Integrity in the Same Sentence

Apache Provides Hierarchical Dynamic Power Integrity Analysis

Power is seductive. It has attracted the attention of universities, designers, tool vendors, journalists, and anyone who wants to be anyone in the silicon and systems worlds. Of course, unlike the situation in so many hallowed halls around the world, the aim here is to reduce power, not increase it (or gather it for oneself). Out of the limelight, however, is the stability of the power infrastructure: how robust is a chip’s power grid in the face of potentially wild gyrations of power draw as a chip is put through its paces? This is the realm of power integrity (two words that, in too many other contexts, have no place near each other), and, like signal integrity with respect to signal speed, it is all too often relegated to second-order status as compared to power.

There are a number of reasons why power integrity considerations are more important today than they once were. Back when chips were simpler conglomerations of data path and logic, there were obvious sources of noise that could be addressed in an ad hoc fashion. Heck, before that, transitions were slow enough that noise wasn’t even as much of an issue, period. Now there are numerous potential sources, and just blindly trying to fix things that might be a problem may not fix anything and will likely result in over-design. With multi-mode circuits, the range of power consumption and activity can vary widely; sharp changes in power can create noise, and the noise profile may vary depending on which mode the circuit is in. And, increasingly, chips are designed by larger teams of engineers, including ones not even officially on the team: those that designed the IP that may be purchased for use in the chip. Ownership of a particular problem is not at all clear.

Power integrity tools address these issues and are not a new concept. But until recently, if you wanted a hierarchical approach to power integrity, you had to use static analysis; if you wanted dynamic analysis, you had to flatten the design. There have apparently been some attempts at a dynamic hierarchical approach, but they haven't really been able to address all sources of noise and have hence been less accurate.

Apache has recently announced a dynamic hierarchical capability, RedHawk-NX, that they say is the first to have no loss of accuracy as compared to a flattened analysis. They do so in a way that doesn’t require a huge number of vectors and that can even take into account IP without giving away the contents of the IP. We can take a look at these characteristics by examining their database and looking at how they generate dynamic stimulus.

At first blush it would seem that a fully accurate power model would, by necessity, contain a complete set of information about the structure of the chip. After all, each transistor in transition can contribute to power, so each transistor must be considered. Leave out any part of the circuit and you’re no longer accurate. This, of course, isn’t what IP providers, in particular, like to hear. They’re loath to give out a full schematic of what they’ve worked hard on, assuming they’re providing a GDS-II rendition of their product. Even synthesizable blocks may be encrypted.

In order to address this, the Apache tool creates a database of power information for each element in the circuit without actually providing the details of how the circuit elements are built or interconnected. It is layout-aware, so the impact of physical proximity will be considered, but it doesn’t record that information in the model. Instead, the geometries and adjacencies and such are all used to create power parameters; those power contributions are stored in the database, and the layout information used to calculate them is then discarded. What results is a set of numbers that accurately reflect relevant power noise information without storing how that information was generated. This can be done, for example, by an IP provider based on their circuit, and then the model – sans the actual circuit information – can be provided to the IP customer for use in full-chip power integrity analysis.
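
To make the idea concrete, here’s a minimal sketch in Python of how layout data might be reduced to anonymous power numbers. The names, structure, and formulas are entirely hypothetical – Apache hasn’t published their database format – but the shape of the flow matches the description above: geometry goes in, only numbers come out.

```python
from dataclasses import dataclass

@dataclass
class LayoutElement:
    anonymous_id: str           # opaque handle; no schematic reference
    width_um: float             # stand-in for the real geometric detail
    neighbor_spacing_um: float  # stand-in for adjacency/proximity data

@dataclass
class PowerEntry:
    element_id: str
    leakage_uW: float        # static power contribution
    switch_energy_pJ: float  # energy per transition, proximity effects folded in

def extract_power_model(elements):
    """Reduce layout data to per-element power numbers, then discard it."""
    model = []
    for e in elements:
        # Placeholder formulas: a real extractor would use the full
        # geometry and adjacency, but only the resulting numbers survive.
        leakage = 0.01 * e.width_um
        energy = 0.5 * e.width_um / max(e.neighbor_spacing_um, 0.1)
        model.append(PowerEntry(e.anonymous_id, leakage, energy))
    return model  # nothing about layout or connectivity is written out

block = [LayoutElement("cell_0001", 0.4, 0.20),
         LayoutElement("cell_0002", 1.2, 0.15)]
print(extract_power_model(block))
```

The point of the exercise: the `PowerEntry` records are all the IP customer ever sees, and there’s no way to reconstruct the circuit from them.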

The second element in RedHawk-NX is the creation of stimulus. They have eschewed the usual value change dump (VCD) file, which is often used as a record of signal transitions but apparently isn’t popular with back-end engineers. As a friendlier alternative, the tool automatically searches the design, inferring state-machine trajectories and signal flow, and identifies the input transitions that trigger the greatest amount of switching activity. This can then be used as the stimulus for analysis; it’s typically only a few vectors.
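
For intuition, here’s a toy brute-force version of that search in Python. The real tool infers state-machine trajectories rather than enumerating vectors, and the netlist format here is purely illustrative: evaluate a small combinational netlist and find the pair of input vectors whose transition toggles the most nets.

```python
from itertools import product

def evaluate(netlist, input_values):
    """Evaluate a tiny combinational netlist.

    netlist maps net name -> (op, input_a, input_b) and must be listed
    in topological order; input_values maps primary inputs to 0/1.
    """
    values = dict(input_values)
    for name, (op, a, b) in netlist.items():
        x, y = values[a], values[b]
        values[name] = {"and": x & y, "or": x | y, "xor": x ^ y}[op]
    return values

def worst_case_transition(netlist, inputs):
    """Find the input-vector pair whose transition toggles the most nets."""
    best, most = None, -1
    vectors = list(product([0, 1], repeat=len(inputs)))
    states = {v: evaluate(netlist, dict(zip(inputs, v))) for v in vectors}
    for v1 in vectors:
        for v2 in vectors:
            toggles = sum(states[v1][n] != states[v2][n] for n in states[v1])
            if toggles > most:
                best, most = (v1, v2), toggles
    return best, most

netlist = {"n1": ("and", "a", "b"), "n2": ("or", "b", "c"),
           "n3": ("xor", "n1", "n2")}
print(worst_case_transition(netlist, ["a", "b", "c"]))
```

Exhaustive enumeration obviously doesn’t scale; the takeaway is only that a handful of well-chosen vectors can stand in for a long VCD trace.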

But aha, you say, you can do that only if you have full knowledge of the circuit, and we just saw that the database doesn’t contain that information. OK, then, do it block by block. But, you rejoin, the worst-case situation for a block may not be the worst full-chip case once the block is hooked into the other blocks. It’s only a “local” worst-case; the “global” worst-case may involve a situation that isn’t the worst-case for that block. So now what?

The answer is that there’s a bit more flexibility in setting up stimulus scenarios, and the setup of a full-chip analysis isn’t necessarily hands-off. A designer who owns a particular block – and this especially applies to IP creators – can create a number of worst-case vector sets. Different modes of the block can be analyzed, with a stimulus created for each one. Each of these modes can be named, and the stimulus sets are then provided with the block or IP. When assembling the blocks and doing full-chip analysis, the integrator (or whoever gets the unenviable task of figuring out whether the individual bits play nicely together) can specify which mode or modes to use in the analysis.
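
A sketch of what that hand-off might look like, again in Python with entirely hypothetical mode names, blocks, and vectors (Apache hasn’t published an interface): each block ships with named stimulus sets, and the integrator composes a scenario from them.

```python
# Each block or IP ships with named worst-case stimulus sets:
cpu_modes = {
    "boot":       ["0011", "0111", "1111"],   # a few vectors per mode
    "full_speed": ["1010", "0101", "1010"],
    "sleep":      ["0000", "0000", "0001"],
}
dsp_modes = {
    "idle":   ["00", "00"],
    "filter": ["10", "01", "11"],
}
block_modes = {"cpu_core": cpu_modes, "dsp_block": dsp_modes}

# The integrator picks which mode each block exercises in one analysis run:
scenario = {"cpu_core": "full_speed", "dsp_block": "filter"}

def assemble_stimulus(block_modes, scenario):
    """Collect the chosen per-block vector set for a full-chip analysis."""
    return {blk: block_modes[blk][mode] for blk, mode in scenario.items()}

print(assemble_stimulus(block_modes, scenario))
```

Note that the vector sets travel with the block, so the IP vendor’s worst-case knowledge is preserved without exposing how those vectors were derived.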

When doing it this way, the entire chip isn’t automatically analyzed for a global worst-case; instead, an engineer picks and chooses modes in a manner that, presumably, results in meaningful worst-case analyses. So, again, IP can be provided in a manner that includes information that’s useful for analysis but doesn’t reveal the recipe for the secret sauce. As a result, the tool should be able to provide analysis that’s completely accurate, since it’s derived from a transistor-by-transistor circuit analysis; provide circuit anonymity where desired; and allow block-by-block hierarchical analysis so that the pieces and their contributions to the whole can be more easily studied, and any problems can be more easily fixed.

