Using Power and Integrity in the Same Sentence

Apache Provides Hierarchical Dynamic Power Integrity Analysis

Power is seductive. It has attracted the attention of universities, designers, tool vendors, journalists, and anyone who wants to be anyone in the silicon and systems worlds. Of course, unlike the situation in so many hallowed halls around the world, the aim here is to reduce power, not increase it (or gather it for oneself). Out of the limelight, however, is the stability of the power infrastructure: how robust is a chip's power grid in the face of potentially wild gyrations of power draw as a chip is put through its paces? This is the realm of power integrity (two words that, in too many other contexts, have no place near each other), and, like signal integrity with respect to signal speed, it is all too often relegated to second-order status compared with power reduction itself.

There are a number of reasons why power integrity considerations are more important today than they once were. Back when chips were simpler conglomerations of data path and logic, there were obvious sources of noise that could be addressed in an ad hoc fashion. Heck, before that, transitions were slow enough that noise wasn't even much of an issue, period. Now there are numerous potential sources, and blindly trying to fix everything that might be a problem may not fix anything and will likely result in over-design. With multi-mode circuits, the range of power consumption and activity can vary widely; sharp changes in power can create noise, and the noise profile may vary depending on which mode the circuit is in. And, increasingly, chips are designed by larger teams of engineers, including ones not even officially on the team: those who designed the IP purchased for use in the chip. Ownership of a particular problem is not at all clear.

Power integrity tools address these issues, and they're not a new concept. But until lately, if you wanted a hierarchical approach to power integrity, you had to use static analysis. If you wanted to use dynamic analysis, you had to use a flattened design. There have apparently been some attempts at a dynamic hierarchical approach, but they haven't really been able to address all sources of noise and have hence been less accurate.

Apache has recently announced a dynamic hierarchical capability, RedHawk-NX, that they say is the first to have no loss of accuracy as compared to a flattened analysis. They do so in a way that doesn't require a huge number of vectors and that can even take into account IP without giving away the contents of the IP. We can take a look at these characteristics by examining their database and looking at how they handle dynamic stimulus.

At first blush it would seem that a fully accurate power model would, by necessity, contain a complete set of information about the structure of the chip. After all, each transistor in transition can contribute to power, so each transistor must be considered. Leave out any part of the circuit and you're no longer accurate. This, of course, isn't what IP providers, in particular, like to hear. They're loath to give out a full schematic of what they've worked hard at, assuming they're providing a GDS-II rendition of their product. Even synthesizable blocks may be encrypted.

In order to address this, the Apache tool creates a database of power information for each element in the circuit without actually providing the details of how the circuit elements are built or interconnected. It is layout-aware, so the impact of physical proximity will be considered, but it doesn’t record that information in the model. Instead, the geometries and adjacencies and such are all used to create power parameters; those power contributions are stored in the database, and the layout information used to calculate them is then discarded. What results is a set of numbers that accurately reflect relevant power noise information without storing how that information was generated. This can be done, for example, by an IP provider based on their circuit, and then the model – sans the actual circuit information – can be provided to the IP customer for use in full-chip power integrity analysis.
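To make the idea concrete, here is a minimal sketch, in Python, of what "use the layout, keep only the numbers" might look like. Everything here – the class, the proximity heuristic, the supply and frequency values – is invented for illustration and is not Apache's actual model format; the point is simply that geometry goes in but only power numbers come out.

```python
from dataclasses import dataclass

@dataclass
class PlacedInstance:
    """A circuit element with its layout details (hypothetical format)."""
    name: str
    x_um: float          # placement coordinates: layout information
    y_um: float
    gate_cap_ff: float   # intrinsic gate capacitance in femtofarads

def neighbor_coupling_ff(inst: PlacedInstance, others: list) -> float:
    """Crude proximity-based coupling estimate, standing in for
    real layout-aware extraction."""
    total = 0.0
    for o in others:
        if o.name == inst.name:
            continue
        d = ((inst.x_um - o.x_um) ** 2 + (inst.y_um - o.y_um) ** 2) ** 0.5
        if d < 5.0:                      # only nearby cells couple appreciably
            total += o.gate_cap_ff / (1.0 + d)
    return total

def build_power_model(instances: list, vdd: float = 0.9,
                      freq_hz: float = 1e9) -> dict:
    """Use geometry to compute per-instance power parameters, then
    return only the numbers; coordinates never leave this function."""
    model = {}
    for inst in instances:
        c_total_f = (inst.gate_cap_ff
                     + neighbor_coupling_ff(inst, instances)) * 1e-15
        # Dynamic power at full activity: P = C * V^2 * f
        model[inst.name] = {"dyn_power_w": c_total_f * vdd ** 2 * freq_hz}
    return model   # power numbers only: no geometry, no connectivity
```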

The second element in RedHawk-NX is the creation of stimulus. They have eschewed the usual value change dump (VCD) file, which is often used as an indication of signal transitions but apparently isn't popular with backend engineers. As a friendlier alternative, the tool automatically searches the design, inferring state machine trajectories and signal flow, and identifies the input transition or transitions that create the greatest amount of switching activity. This can then be used as the stimulus for analysis; it's typically only a few vectors.
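As a toy illustration of the concept – emphatically not Apache's algorithm, which infers worst-case activity from the design's structure rather than enumerating vectors – here's a brute-force hunt for the input transition that flips the most nets in a small combinational block:

```python
from itertools import product

def toggles(netlist_eval, vec_a, vec_b):
    """Count internal nets that flip when inputs move from vec_a to vec_b."""
    before, after = netlist_eval(vec_a), netlist_eval(vec_b)
    return sum(1 for x, y in zip(before, after) if x != y)

def worst_case_transition(netlist_eval, n_inputs):
    """Exhaustive search over input-vector pairs. This blows up
    exponentially, which is exactly why a real tool must infer the
    worst case from structure instead of enumerating it."""
    best, best_pair = -1, None
    vectors = list(product([0, 1], repeat=n_inputs))
    for a in vectors:
        for b in vectors:
            t = toggles(netlist_eval, a, b)
            if t > best:
                best, best_pair = t, (a, b)
    return best_pair, best

# Example: a 3-input block whose internal nets are [AND, OR, XOR-of-all]
def demo_netlist(v):
    a, b, c = v
    return [a & b, b | c, a ^ b ^ c]

pair, count = worst_case_transition(demo_netlist, 3)
print(f"worst-case stimulus {pair} flips {count} nets")
```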

But aha, you say, you can do that only if you have full knowledge of the circuit, and we just saw that the database doesn’t contain that information. OK, then, do it block by block. But, you rejoin, the worst-case situation for a block may not be the worst full-chip case once the block is hooked into the other blocks. It’s only a “local” worst-case; the “global” worst-case may involve a situation that isn’t the worst-case for that block. So now what?

The answer is that there's a bit more flexibility in setting up stimulus scenarios, and the setup of a full-chip analysis isn't necessarily hands-off. A designer who owns a particular block – and this especially applies to IP creators – can create a number of worst-case vector sets. Different modes of the block can be analyzed, with a stimulus created for each one. Each of these modes can be named, and these stimulus sets are then provided with the block or IP. When assembling the blocks and doing full-chip analysis, the integrator (or whoever gets the unenviable task of figuring out whether the individual bits play nicely together) can specify which mode or modes to use in the analysis.
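Here's a hypothetical sketch of how such named, per-block stimulus sets might be packaged with a block and then selected at integration time; the block names, mode names, and vector strings are all invented placeholders, not a real RedHawk-NX format.

```python
# Stimulus sets shipped with each block or IP, keyed by a named mode.
block_stimuli = {
    "cpu_core": {
        "wakeup": ["0x00->0xFF", "0xFF->0x00"],   # placeholder vectors
        "crunch": ["0xA5->0x5A"],
    },
    "ddr_phy": {
        "training": ["0x00->0x3C"],
        "burst":    ["0x3C->0xC3", "0xC3->0x3C"],
    },
}

def assemble_full_chip_stimulus(selection):
    """Pull each block's chosen mode into one full-chip scenario."""
    scenario = []
    for block, mode in selection.items():
        scenario.extend((block, vec) for vec in block_stimuli[block][mode])
    return scenario

# The integrator decides which combination plausibly stresses the grid:
scenario = assemble_full_chip_stimulus({"cpu_core": "crunch",
                                        "ddr_phy": "burst"})
for block, vector in scenario:
    print(block, vector)
```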

When doing it this way, the entire chip isn’t automatically analyzed for a global worst-case; instead, an engineer picks and chooses modes in a manner that, presumably, results in meaningful worst-case analyses. So, again, IP can be provided in a manner that includes information that’s useful for analysis but doesn’t reveal the recipe for the secret sauce. As a result, the tool should be able to provide analysis that’s completely accurate, since it’s derived from a transistor-by-transistor circuit analysis; provide circuit anonymity where desired; and allow block-by-block hierarchical analysis so that the pieces and their contributions to the whole can be more easily studied, and any problems can be more easily fixed.
