
Using Power and Integrity in the Same Sentence

Apache Provides Hierarchical Dynamic Power Integrity Analysis

Power is seductive. It has attracted the attention of universities, designers, tool vendors, journalists, and anyone who wants to be anyone in the silicon and systems worlds. Of course, unlike the situation in so many hallowed halls around the world, the aim here is to reduce power, not increase it (or gather it for oneself). Out of the limelight, however, is the stability of the power infrastructure: how robust is a chip’s power grid in the face of potentially wild gyrations of power draw as a chip is put through its paces? This is the realm of power integrity (two words that, in too many other contexts, have no place near each other), and, like signal integrity with respect to signal speed, it is all too often relegated to second-order status as compared to power.

There are a number of reasons why power integrity considerations are more important today than they once were. Back when chips were simpler conglomerations of data path and logic, there were obvious sources of noise that could be addressed in an ad hoc fashion. Heck, before that, transitions were slow enough that noise wasn’t even as much of an issue, period. Now there are numerous potential sources, and just blindly trying to fix things that might be a problem may not fix anything and will likely result in over-design. With multi-mode circuits, the range of power consumption and activity can vary widely; sharp changes in power can create noise, and the noise profile may vary depending on which mode the circuit is in. And, increasingly, chips are designed by larger teams of engineers, including ones not even officially on the team: those that designed the IP that may be purchased for use in the chip. Ownership of a particular problem is not at all clear.

Power integrity tools address these issues and are not a new concept. But until lately, if you wanted a hierarchical approach to power integrity, you had to use static analysis. If you wanted to use dynamic analysis, you had to use a flattened design. There have apparently been some attempts at a dynamic hierarchical approach, but they haven’t really been able to address all sources of noise and have hence been less accurate.

Apache has recently announced a dynamic hierarchical capability, RedHawk-NX, that they say is the first to have no loss of accuracy as compared to a flattened analysis. They do so in a way that doesn’t require a huge number of vectors and that can even take into account IP without giving away the contents of the IP. We can take a look at these characteristics by examining their database approach and then looking at how they handle stimulus generation.

At first blush it would seem that a fully accurate power model would, by necessity, contain a complete set of information about the structure of the chip. After all, every transistor that switches can contribute to power, so each transistor must be considered. Leave out any part of the circuit and you’re no longer accurate. This, of course, isn’t what IP providers, in particular, like to hear. They’re loath to give out a full schematic of what they’ve worked hard at, assuming they’re providing a GDS-II rendition of their product. Even synthesizable blocks may be encrypted.

In order to address this, the Apache tool creates a database of power information for each element in the circuit without actually providing the details of how the circuit elements are built or interconnected. It is layout-aware, so the impact of physical proximity will be considered, but it doesn’t record that information in the model. Instead, the geometries and adjacencies and such are all used to create power parameters; those power contributions are stored in the database, and the layout information used to calculate them is then discarded. What results is a set of numbers that accurately reflect relevant power noise information without storing how that information was generated. This can be done, for example, by an IP provider based on their circuit, and then the model – sans the actual circuit information – can be provided to the IP customer for use in full-chip power integrity analysis.
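To make the idea concrete, here’s a minimal sketch of what such a “power numbers only” model might look like. This is not Apache’s actual database format; the field names and the methods on the hypothetical layout_db object are invented purely for illustration. The point is that the parameters are computed with full layout knowledge, but only the derived numbers are kept.

```python
# Toy illustration (NOT Apache's format) of a layout-aware power model
# that omits the layout itself: per-instance current-demand parameters
# are derived from the full physical design, then only the numbers are
# retained for the IP customer.

from dataclasses import dataclass
from typing import List

@dataclass
class InstancePowerModel:
    instance_name: str          # logical name only; no cell contents
    leakage_current_uA: float   # static current draw
    switch_charge_pC: float     # charge drawn per switching event
    peak_current_mA: float      # worst-case instantaneous demand
    decap_pF: float             # intrinsic decoupling the instance provides

def build_power_models(layout_db) -> List[InstancePowerModel]:
    """Derive power parameters from a layout view, then discard it.

    `layout_db` stands in for whatever geometric/netlist view the IP
    owner has; only the derived numbers ever leave this function.
    """
    models = []
    for inst in layout_db.instances():          # hypothetical API
        models.append(InstancePowerModel(
            instance_name=inst.name,
            leakage_current_uA=inst.estimate_leakage(),
            switch_charge_pC=inst.estimate_switch_charge(),
            peak_current_mA=inst.estimate_peak_current(),
            decap_pF=inst.estimate_decap(),
        ))
    return models   # the layout database itself is never serialized
```

An IP provider could run something like this over its own layout and hand the resulting list of records to the customer, keeping the geometry and connectivity in-house.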

The second element in RedHawk-NX is the creation of stimulus. They have eschewed the usual value change dump (VCD) file, which is often used as an indication of signal transitions but apparently isn’t popular with backend engineers. As a friendlier alternative, the tool automatically searches the design, inferring state machine trajectories and signal flow, and identifies the input transition (or transitions) that creates the greatest amount of switching activity. This can then be used as the stimulus for analysis; it’s typically only a few vectors.
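The article describes the tool inferring this worst-case stimulus automatically from the design structure; how it does so isn’t disclosed. As a stand-in to make the objective concrete, the sketch below simply brute-forces candidate vector pairs through a user-supplied simulation function and keeps the pair that toggles the most nodes. The `simulate` callable and the vector format are assumptions for the example, not part of the actual tool.

```python
# Toy stand-in for "find the stimulus that maximizes switching activity."
# The real tool infers this from state machines and signal flow; here we
# just try every ordered pair of candidate input vectors and count toggles.

from itertools import permutations
from typing import Callable, Dict, List, Tuple

Vector = Tuple[int, ...]

def count_toggles(before: Dict[str, int], after: Dict[str, int]) -> int:
    """Number of internal nodes whose value changes between two states."""
    return sum(1 for node in before if before[node] != after[node])

def worst_case_stimulus(
    candidates: List[Vector],
    simulate: Callable[[Vector], Dict[str, int]],
) -> Tuple[Vector, Vector]:
    """Return the ordered pair of input vectors producing the most toggles.

    `simulate` is a stand-in for whatever maps an input vector to the
    circuit's internal node values.
    """
    best_pair, best_toggles = None, -1
    for v1, v2 in permutations(candidates, 2):
        toggles = count_toggles(simulate(v1), simulate(v2))
        if toggles > best_toggles:
            best_pair, best_toggles = (v1, v2), toggles
    return best_pair
```

The output is exactly the kind of short stimulus the article mentions: just a couple of vectors, rather than a long VCD trace.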

But aha, you say, you can do that only if you have full knowledge of the circuit, and we just saw that the database doesn’t contain that information. OK, then, do it block by block. But, you rejoin, the worst-case situation for a block may not be the worst full-chip case once the block is hooked into the other blocks. It’s only a “local” worst-case; the “global” worst-case may involve a situation that isn’t the worst-case for that block. So now what?

The answer is that there’s a bit more flexibility in setting up stimulus scenarios, and the setup of a full-chip analysis isn’t necessarily hands-off. A designer who owns a particular block – and this especially applies to IP creators – can create a number of worst-case vector sets. Different modes of the block can be analyzed, with a stimulus created for each one. Each of these modes can be named, and the stimulus sets are then provided along with the block or IP. When assembling the blocks and doing full-chip analysis, the integrator (or whoever gets the unenviable task of figuring out whether the individual bits play nicely together) can specify which mode or modes to use in the analysis.
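A rough sketch of how those named, per-block stimulus sets might be bundled and then combined by an integrator follows. The block names, mode names, and vector labels are all invented for the example; the structure is just one plausible way to organize what the article describes.

```python
# Hypothetical illustration of per-block, per-mode stimulus sets shipped
# with IP, and an integrator picking one mode per block for a full-chip
# scenario. All names are invented for the example.

block_stimulus_library = {
    "usb_phy":  {"active_transfer": ["v0", "v1"], "suspend": ["v2"]},
    "ddr_ctrl": {"burst_write": ["v3", "v4"], "refresh": ["v5"]},
    "cpu_core": {"max_switching": ["v6", "v7"], "idle": ["v8"]},
}

# The integrator defines a full-chip scenario by naming one mode per block.
scenario_peak_traffic = {
    "usb_phy":  "active_transfer",
    "ddr_ctrl": "burst_write",
    "cpu_core": "max_switching",
}

def stimulus_for_scenario(library, scenario):
    """Collect the vector sets that feed the full-chip analysis."""
    return {blk: library[blk][mode] for blk, mode in scenario.items()}

print(stimulus_for_scenario(block_stimulus_library, scenario_peak_traffic))
```

Note that nothing in the library reveals what’s inside the blocks; it only names modes and the vectors that exercise them, which is what lets IP ship with useful stimulus while staying opaque.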

When doing it this way, the entire chip isn’t automatically analyzed for a global worst-case; instead, an engineer picks and chooses modes in a manner that, presumably, results in meaningful worst-case analyses. So, again, IP can be provided in a manner that includes information that’s useful for analysis but doesn’t reveal the recipe for the secret sauce. As a result, the tool should be able to provide analysis that’s completely accurate, since it’s derived from a transistor-by-transistor circuit analysis; provide circuit anonymity where desired; and allow block-by-block hierarchical analysis so that the pieces and their contributions to the whole can be more easily studied, and any problems can be more easily fixed.

