
Necessary and Sufficient?

A Closer Look at Cell-Aware Modeling

Chip testing is always a delicate balance between testing enough and not testing too much. In reality, you want to find the “necessary and sufficient” set of tests for best quality at the lowest test cost. That’s a tough thing to get right.

Throw on top of that goal the fact that SoCs and other modern digital chips require automation to generate test vectors. Even if you find that perfect test balance, if you can’t figure out how to craft an algorithm to implement that balance automatically, it becomes an academic exercise.

Mentor wrote an article a couple of months ago outlining their “cell-aware” fault modeling, contrasting it conceptually with stuck-at fault models but running experiments that compared coverage and vector counts against a gate-exhaustive approach. Their conclusion was that the cell-aware approach struck a better balance.

What they didn’t really discuss is how this all works. So here we’re going to delve into the details a bit more, even deigning to run through a simple specific example (courtesy of Mentor).

Let’s start by baselining with some review. A stuck-at fault model is intended to prove that the individual nets in the circuit aren’t stuck high or stuck low. If you have a three-input AND gate and you want to prove that the output isn’t stuck, you can use one vector with one or more inputs low to verify that the output isn’t stuck high, and you can use another vector with all inputs high to prove that the output isn’t stuck low.
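As a minimal sketch (mine, not from any ATPG tool), here’s the output-net case in a few lines of Python; it models only the output stuck high or low, nothing else:

```python
# A minimal sketch of the stuck-at idea for the output net of a 3-input
# AND gate: a faulty output is tied to 0 or 1 no matter what the inputs
# do, so any vector whose good response differs will expose the fault.
from itertools import product

def and3(a, b, c):
    return a & b & c

vectors = list(product([0, 1], repeat=3))

for stuck_at in (0, 1):
    detecting = [v for v in vectors if and3(*v) != stuck_at]
    print(f"output stuck-at-{stuck_at}: detected by {detecting}")

# stuck-at-0 is exposed only by (1, 1, 1); stuck-at-1 by the other seven
# vectors, so two well-chosen vectors cover both faults on the output net.
```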

While this is really simple and has been automated for years, it leaves out lots of other faults. After all, nets can be shorted – or “bridged” – to other nets, especially with today’s weird lithography artifacts. Or there might be an open in a line. And these shorts and opens can be “clean” or resistive. If they’re clean, then presumably you can detect some static logic problem. If resistive, then you’re more likely to see the circuit establish some stable state, but slowly, so you have to do a dynamic test.

But, given a generic gate, how do you get at these other faults? One approach has been the so-called gate-exhaustive model. In this case, you apply all combinations of inputs to the gates to assure that the output is correct for all cases. For the three-input AND gate, that would mean 2³, or 8, vectors instead of the two we identified above.
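To see the cost side of that trade, here’s the same AND gate treated gate-exhaustively, again just a toy illustration:

```python
# A minimal sketch of the gate-exhaustive idea for the same 3-input AND
# gate: every input combination gets applied and checked, so the vector
# count grows as 2**n with the number of inputs.
from itertools import product

exhaustive = list(product([0, 1], repeat=3))
assert len(exhaustive) == 2**3   # 8 vectors, versus the 2 identified above

for a, b, c in exhaustive:
    expected = 1 if (a, b, c) == (1, 1, 1) else 0
    assert (a & b & c) == expected
```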

But for a more illustrative example, let’s look at something a little more complex than a single AND gate: a 3-input mux. To an automatic test pattern generation (ATPG) tool, it may look logically like Figure 1:

Figure 1. Logical view of mux


From this logical representation, the ATPG tool can derive a set of vectors to cover the stuck-at faults, shown in Figure 2:

Figure 2. Vectors for stuck-at coverage

These vectors are derived from the gate-level primitives, and because of the staged two-input muxes, they’re not as symmetric as you might expect. But this is just a logical view and involves no actual knowledge of what might realistically be bridged or open in an actual circuit.
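To make the derivation a bit more concrete, here’s a hedged sketch of the kind of fault enumeration an ATPG tool performs on the logical view. The internal structure below (select s0 picks d0 or d1, select s1 picks that result or d2) is an assumption for illustration, not a literal transcription of Figure 1:

```python
# A sketch of enumerating stuck-at faults on an internal net of the
# staged mux and finding which input vectors can detect them.
from itertools import product

def mux2(a, b, s):
    return b if s else a

def mux3(d0, d1, d2, s0, s1, fault=None):
    n1 = mux2(d0, d1, s0)            # internal net between the two stages
    if fault == ("n1", 0):           # inject a stuck-at-0 on that net
        n1 = 0
    elif fault == ("n1", 1):         # or a stuck-at-1
        n1 = 1
    return mux2(n1, d2, s1)

vectors = list(product([0, 1], repeat=5))    # (d0, d1, d2, s0, s1)

for fault in (("n1", 0), ("n1", 1)):
    detecting = [v for v in vectors if mux3(*v) != mux3(*v, fault=fault)]
    print(fault, "detected by", len(detecting), "of", len(vectors), "vectors")

# An ATPG tool repeats this for every net in the logical view and then
# picks a small covering set, which is part of why the result in Figure 2
# isn't as symmetric as a hand-written set would be.
```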

That’s where the cell-aware concept comes in. The idea is to use the actual layout to figure out where the real issues might be and ensure that there are vectors covering those issues. Figure 3 shows the layout for the mux. And you can see that there’s one place where two yellow nets come close together, running the risk of bridging (in red). This is but one of many such fault candidates, but it’s the example we’ll follow.

Figure 3. Mux layout showing potential bridge (red)


So how do you automate a way to find such potential bridges – or opens, for that matter? It’s easy for us to do by inspection, but that won’t work in the real world. It turns out, however, that there’s a straightforward way to identify these points. In fact, you already do this: parasitic extraction.

The whole point of parasitic extraction is to identify a) items that are close enough to affect each other capacitively and b) runs of metal or poly that are resistive. Each of those capacitors is a potential bridge, and each of those resistors is a potential open.
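A toy version of that bookkeeping, with made-up element names and values rather than anything from a real extraction deck, might look like this:

```python
# A minimal sketch: walk an extracted parasitic list and turn coupling
# caps into bridge candidates and series resistances into open candidates.
# The tuples and net names here are invented for illustration; a real flow
# would parse a SPICE/DSPF extraction deck.
parasitics = [
    ("C", "netB", "netD", 0.4e-15),    # coupling cap -> potential bridge
    ("C", "netA", "netC", 0.2e-15),
    ("R", "netB", "netB_1", 12.0),     # series resistance -> potential open
]

bridge_candidates = [(a, b) for kind, a, b, _ in parasitics if kind == "C"]
open_candidates   = [(a, b) for kind, a, b, _ in parasitics if kind == "R"]

print("bridge candidates:", bridge_candidates)
print("open candidates:  ", open_candidates)
```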

So let’s go, first, to the full cell schematic, shown in Figure 4. Here you can see where the bridge in Figure 3 is, although the fact that those two lines are close to each other here is purely coincidental – in other cases, they could be nets that are adjacent in the layout, but more distantly related in the schematic.

Figure 4. Full cell schematic

Figure 5 shows a portion of the full schematic with parasitics included. I’ve isolated only that portion that’s relevant to this particular fault (and I’ve simplified it a bit). You can correlate the various capacitors and resistors with the layout in Figure 3. Each of those capacitors could bridge, and each of the resistors could be open. In our particular example, the capacitor that models the influence between Net B and Net D becomes a 1-Ω resistor (although you could also use a higher resistance to model a resistive short).
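In netlist terms, injecting the defect amounts to swapping the coupling capacitor for a small resistor. The sketch below uses invented element names and a simplified line format rather than the actual extracted deck:

```python
# A hedged sketch of injecting the defect: the coupling cap between Net B
# and Net D is replaced with a 1-ohm bridge resistor before simulation.
good_netlist = [
    "Cc1 netB netD 0.4f",    # the coupling cap flagged as a bridge candidate
    "Rp1 netB netB_1 12",    # a parasitic resistance (potential open)
]

def inject_bridge(netlist, cap_name, r_ohms=1.0):
    faulty = []
    for line in netlist:
        name, n1, n2, *_ = line.split()
        if name == cap_name:
            faulty.append(f"Rbridge {n1} {n2} {r_ohms}")   # short the two nets
        else:
            faulty.append(line)
    return faulty

print(inject_bridge(good_netlist, "Cc1"))
```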

Figure 5. Portion of the schematic including parasitics. The bridge is a 1-Ω resistor.

Now the question becomes, what happens with this bridge? What goes wrong, and how can you detect it? And to figure that out, you simulate – at a low level. You have to simulate the real circuit because many times you will end up with two drivers fighting, and you have to figure out which will win. According to Mentor, it took 1500 individual simulations to cover this particular multiplexor cell.
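The shape of that characterization loop is simple, even if the individual simulations aren’t. In this sketch, spice_sim is a hypothetical stand-in for a real analog simulator run:

```python
# A conceptual sketch of the characterization loop: every defect candidate
# times every input vector, keeping the vectors whose faulty response
# differs from the good cell's. spice_sim() is purely hypothetical.
from itertools import product

def characterize(defect_netlists, good_netlist, num_inputs, spice_sim):
    detecting = {}
    for defect_id, faulty_netlist in defect_netlists.items():
        detecting[defect_id] = [
            vec for vec in product([0, 1], repeat=num_inputs)
            if spice_sim(faulty_netlist, vec) != spice_sim(good_netlist, vec)
        ]
    return detecting   # e.g. the four vectors of Figure 6 for our bridge
```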

For the specific fault shown above, the tool identified four different vectors that can detect this fault, shown in Figure 6.

Figure 6. Four vectors that can detect the fault.

Now, you might wonder, “why do I need four different ways to detect the fault?” In truth, in a final set of vectors, you don’t: you need only one.

But here’s the deal: all of this characterization and simulation is done once when building the cell library. Those four extra vectors are included as part of the kit.

Fast forward to an actual circuit design using this cell. An ATPG tool will go through and do its standard cell-unaware vector generation. When that’s done, it can go back into the cells to see what special cases need to be covered based on the actual cell layout. If you’re lucky – and your chances are good here – some vector will already be in place that provides the extra coverage, so you won’t need to add any more. That’s why having multiple possible vectors (four in this example) is useful: it increases the chances that you will be fortuitously covered. If it turns out you weren’t lucky, then the ATPG tool picks one of the target vectors to add to the test suite.
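Stripped to its essentials, that top-up step looks something like the sketch below. It glosses over the real work of mapping chip-level patterns down to cell-level stimulus, and the data structures are invented for illustration:

```python
# A hedged sketch of the cell-aware top-up: reuse an existing pattern if it
# already detects the fault, otherwise add one targeted vector from the kit.
def top_up(existing_cell_vectors, cell_faults):
    """existing_cell_vectors: set of cell-level vectors already exercised by
    the standard ATPG patterns. cell_faults: dict mapping each cell-aware
    fault to the set of detecting vectors (e.g. the four in Figure 6) that
    library characterization found for it."""
    added = []
    for fault, detecting in cell_faults.items():
        if existing_cell_vectors & detecting:
            continue                       # fortuitously covered already
        chosen = next(iter(detecting))     # targeted addition: just one
        existing_cell_vectors.add(chosen)
        added.append((fault, chosen))
    return added
```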

It’s because of this targeted addition of vectors – which may not even have to be added – that the cell-aware approach provides a better balance than the gate-exhaustive approach, which simply blasts vectors, many of which don’t do anything useful. Whether it meets the specific goal of both necessary and sufficient is harder to say, but anything closer to that goal is goodness.
