With the increasing size and complexity of ICs and the limitations of traditional physical failure analysis tools, failure analysis engineers need help determining the root cause of a specific failing die. Yield engineers, on the other hand, need to identify systematic yield limiters that may be disguised as random failures caused by complex interactions between the manufacturing process and specific design patterns. A failure diagnosis tool that provides high accuracy and resolution, as well as meaningful defect classifications, can be of high value to both engineers.
Today, most complex ICs are tested using built-in scan test logic and automatically generated test patterns. Scan logic diagnosis software correlates test pattern results with the failure source to identify the root cause of a failing die.
Significant improvements have been made in scan logic diagnosis [1–17], and several uses of diagnosis tools in failure analysis and yield analysis have been reported [9–17]. However, the defect classifications, accuracy, and resolution provided by diagnosis tools generally have been insufficient. Typically, diagnosis software presents too many suspect failures to the engineer and provides imprecise information about the location of the defect, which can make determining the root cause a long and laborious process. One solution to this problem is complementing traditional diagnosis tools with dedicated failure analysis techniques or other software tools [13–15].
Even with improvements such as new defect types, new logic-based analysis algorithms that remove logically impossible defect candidates, and the ability to target cells as defect locations, a logic-based diagnosis tool cannot determine which suspected defects are physically possible.
Scan logic diagnosis can speed up the process and deliver much better results if the software integrates layout information to pinpoint the most likely physical location of a defect. By including layout data,
• accuracy and resolution improve,
• defect types can be validated against the layout, and
• defects can be located at the polygon level.
By using both the netlist and layout information, diagnosis software can determine which nets are neighbors on the die and which nets are far apart, eliminating bridge defect locations that may be logically sound but are impossible based on the actual layout [16, 17]. Similarly, layout information identifies the likelihood of multiple branches of a net failing simultaneously, further reducing false candidates [17, 18]. Layout data can provide x and y locations of library cell instances, polygon shapes, and locations for each polygon of a net, pin, or cell, and the layout hierarchy. Successful applications of diagnosis tools using layout information have been reported [16, 17, 19, 20].
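The neighbor test described above can be sketched in a few lines. This is a simplified, hypothetical model, not a real tool's API: each net's geometry is reduced to axis-aligned rectangles (xmin, ymin, xmax, ymax), and a logically possible bridge pair survives only if some polygon of one net comes within a threshold distance of some polygon of the other. All names and the threshold are illustrative.

```python
# Hypothetical sketch: prune logically possible bridge pairs that are
# physically impossible because the two nets never come close in the layout.
# Polygons are modeled as axis-aligned rectangles (xmin, ymin, xmax, ymax).

def rect_distance(a, b):
    """Minimum separation between two axis-aligned rectangles (0 if they touch)."""
    dx = max(a[0] - b[2], b[0] - a[2], 0)
    dy = max(a[1] - b[3], b[1] - a[3], 0)
    return (dx * dx + dy * dy) ** 0.5

def filter_bridge_pairs(logic_pairs, net_polygons, max_gap):
    """Keep only net pairs whose polygons come within max_gap of each other."""
    survivors = []
    for net_a, net_b in logic_pairs:
        close = any(
            rect_distance(pa, pb) <= max_gap
            for pa in net_polygons[net_a]
            for pb in net_polygons[net_b]
        )
        if close:
            survivors.append((net_a, net_b))
    return survivors
```

In a production tool, the same idea would operate on real routing shapes per layer and account for obstructions; the sketch only shows why netlist-plus-layout data is enough to discard far-apart pairs.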
Measure Diagnosis Quality: Accuracy and Resolution
The accuracy of a diagnosis tool is determined by how well it identifies the true defective net(s) and the true defect type. For a given list of suspects, accuracy is a binary decision: is the true defect on the list of suspects, yes or no? The list can be limited by an arbitrary cut-off point, such as the top 10 candidates or the 5 highest-ranked candidates.
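The binary, cut-off-limited definition above amounts to a one-line check. The function name and the default cut-off are illustrative choices, not from any particular tool:

```python
# Accuracy as a binary decision limited by an arbitrary cut-off point:
# the diagnosis is "accurate" if the true defective net appears within
# the first `cutoff` ranked suspects.

def is_accurate(ranked_suspects, true_defect_net, cutoff=10):
    """True if the true defect is among the top `cutoff` suspects."""
    return true_defect_net in ranked_suspects[:cutoff]
```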
Layout data can be used to validate logical candidates against the physical possibilities. Logic-based diagnosis algorithms might identify a suspect – for example, a bridge – explaining an observed faulty behavior. But this suspect might not be possible based on the routing of the nets involved in this bridge.
Figure 1 shows a group of nets called out by a logic-only diagnosis tool. In this example, each of the nets is a valid candidate for a bridge, based on the simulated logic values. Thus, diagnosis results based on logic algorithms alone would include each net. The reported result would be more meaningful if the candidates could be narrowed down to a most likely net pair.
Figure 1: Group of nets called out by a logic-only diagnosis tool.
Resolution can be defined by the area of the defective location captured by a suspect, that is, the “bounding box” of a defect (which is typically not actually shaped like a box). For logic-based diagnosis tools, the resolution equals the combined area of all reported nets. Logic-based diagnosis tools do not know where two nets potentially bridge or where an open on a net might be located. Thus the whole net must be searched to find the defect.
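Under the same rectangle model as before (an assumption for illustration, with axis-aligned rectangles standing in for real polygons), the resolution of a logic-only suspect can be expressed as the combined area of every polygon of every reported net, i.e., the total area that must be searched:

```python
# Sketch of the resolution metric described above: for logic-only diagnosis,
# the defect "bounding box" is the whole net, so the search area is the sum
# of the areas of all polygons of all reported suspect nets.
# Rectangles are (xmin, ymin, xmax, ymax); the numbers are illustrative.

def rect_area(r):
    return (r[2] - r[0]) * (r[3] - r[1])

def logic_only_resolution(suspect_nets, net_polygons):
    """Total area that must be searched when only net names are reported."""
    return sum(rect_area(p) for net in suspect_nets for p in net_polygons[net])
```

A layout-aware tool improves this metric by reporting a defect bounding box that is typically a tiny fraction of this combined net area.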
Figure 2 shows the diagnosis result of the same case, but now layout-aware, using Mentor’s advanced logic diagnosis tool. All but one bridge suspect were removed from the suspect list. This one net pair is shown in green and yellow. In addition, the layout-aware diagnosis tool computes and reports the polygon of the bounding box of the defect location, pointing out in layout terms where the defect might happen. The bridge bounding box is shown in red.
Figure 2: Group of nets called out by a layout-aware diagnosis tool.
The resolution improved markedly, reducing the list of bridge suspects from six down to one. Even more significant is the reduction of the physical space that must be probed to verify the root cause.
Analysis on a large number of cases has shown that a layout-aware diagnosis tool is able to remove more than 85% of all logical bridge suspects, significantly improving the accuracy and resolution of identifying bridge defect suspects. For later reporting, the layout-aware diagnosis tool can compute the location (x/y/layer), distance, parallel run length, and the critical area for each bridge-defect bounding box and attach these physical properties as annotation. Similar physical properties can be computed for open-defect bounding boxes.
Distinguish a Dominant Bridge from an Open Defect
Logic-based diagnosis cannot find the aggressor net of a dominant bridge or distinguish between a dominant bridge and an open defect; layout-aware diagnosis can. Dominant bridges are bridges in which only one net, the “victim,” shows defective behavior, while the second net, the “aggressor,” does not. Logic-based diagnosis tools are usually able to identify the victim net. To find the aggressor nets, the good and faulty logic values of the victim net must be compared to the logic values of every other net, for all test patterns. Nets whose logic values match those of a bridge defect are aggressor net candidates. This comparison is resource-intensive and impractical for all but the smallest designs.
On the other hand, an open defect on the victim net could explain the observed faulty behavior as well. Not knowing if there is a matching aggressor net, logic-based diagnosis has no choice but to declare the victim net as both a bridge candidate and an open candidate, introducing ambiguity into the diagnosis result. Although logic-based diagnosis is able to identify the victim net, the correct defect type and, in the case of a bridge, the aggressor net information are out of reach. Layout-aware diagnosis is able to remove the ambiguity between a bridge and an open candidate by identifying all possible aggressor nets, and, if none are possible, leaving only the open defect candidates as the possible sources of the failure. The layout-aware diagnosis tool will report both candidates only for those cases where a dominant bridge and an open defect are possible, since both are layout-justifiable.
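The pattern-by-pattern comparison described above can be sketched under a simple full-dominance assumption (mine, for illustration): the victim's observed value equals the aggressor's simulated value on every pattern, and the aggressor must actually differ from the victim's good value somewhere, or it explains no failure. The inputs (per-pattern 0/1 values from logic simulation) and all names are hypothetical:

```python
# Hedged sketch of the aggressor search: under a fully dominant bridge,
# the victim's observed value equals the aggressor's driven value on every
# test pattern. A candidate must match the observed values everywhere and
# differ from the victim's good value on at least one pattern.

def aggressor_candidates(victim_good, victim_observed, net_values):
    """victim_good / victim_observed: 0/1 value per test pattern.
    net_values: {net_name: simulated 0/1 value per test pattern}."""
    candidates = []
    for net, values in net_values.items():
        explains_fails = all(
            v == obs for v, obs in zip(values, victim_observed)
        )
        causes_flip = any(
            v != good for v, good in zip(values, victim_good)
        )
        if explains_fails and causes_flip:
            candidates.append(net)
    return candidates
```

The sketch also shows why the brute-force search is impractical at scale: it compares the victim against every net on every pattern, which is exactly the cost a layout engine avoids by restricting the comparison to physical neighbors.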
Identify Open Net Segments
Logic-based diagnosis is usually able to point out nets that potentially contain an open defect, thus establishing a certain level of accuracy. The resolution of the open suspect contains the whole net, i.e. the bounding box of the defect is the net itself. Nets are long, and open defect locations can be hard to find; thus, better resolution is required for open defects.
By incorporating layout information, the bounding box of the open defect on the found net can be determined down to a few polygons, if the net has two or more sink gates. In this case, the layout-aware analysis can also confirm whether or not the net is likely to have a single open defect, as suggested by the logic-based diagnosis, further improving the accuracy.
Figure 3 shows an identified net with one driving gate (D) and six sink gates. Three of the sink gates are failing (F) at least one of the test patterns and the three other sink gates are passing (P) every test pattern. Knowing the test results and the exact topology of the net, the layout-aware diagnosis tool can determine the segment of the net that contains the open defect. The segment marked in red is the bounding box of the open defect.
Figure 3: An open net segment identified in diagnosis. The net has one driving gate (D) and six sink gates; three sink gates fail (F) at least one test pattern, and three pass (P) every test pattern.
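The segment analysis behind Figure 3 can be sketched under a simple model (an assumption for illustration): the net's topology is a tree of segments rooted at the driver, and a single open must lie on every driver-to-failing-sink path but on no driver-to-passing-sink path. Node and segment names are hypothetical:

```python
# Sketch of open-segment localization on a net modeled as a tree rooted at
# the driver. Each segment is identified by its child endpoint. A single
# open must be upstream of all failing sinks and of no passing sink.

def open_segment_candidates(tree, failing_sinks, passing_sinks):
    """tree: {node: parent_node}, with the driver mapped to None."""
    def path_to_root(node):
        segs = set()
        while tree[node] is not None:
            segs.add(node)  # the segment between node and its parent
            node = tree[node]
        return segs

    # The open must lie on every path from the driver to a failing sink...
    common = set.intersection(*(path_to_root(s) for s in failing_sinks))
    # ...and must not cut off any passing sink.
    for s in passing_sinks:
        common -= path_to_root(s)
    return common
```

If the result is empty, no single open explains the observed pass/fail behavior, which is exactly the confirmation step mentioned above.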
Analysis on a large number of cases has shown that a layout-aware diagnosis tool is able to remove more than 70% of the net area from consideration, significantly improving the resolution for open suspects.
Better Analysis with Layout-Aware Diagnosis Reports
Combining layout data and logic diagnosis can improve accuracy, resolution, and defect classification, but these improvements are diminished if reporting is limited by the traditional practice of printing net names only. Reporting all the polygon information and additional physical properties, such as the location, critical area, and other attributes, for each of the suspects significantly increases the amount of useful data, particularly for a failure analysis engineer.
In addition to the usual text-based report, layout-aware diagnosis should report the polygon and bounding box data. This enables a direct interface to other tools, for example, to display the results in a layout viewer or to drive failure analysis equipment.
Improve Layout-Aware Diagnosis Flows
Various methods and flows are used to add layout information to diagnosis results. Most flows use layout information as a filtering step after the actual diagnosis. Although helpful, this does not mine all the benefits available when layout information is an integral part of diagnosis.
The simplest traditional approach is to take the logic-based diagnosis result as is, translate the suspect nets into polygons, and save the layout information as an addendum to the logic-based report (Figure 4). In this case, the accuracy and resolution of the layout-aware report are the same as the logic-based report because it contains the same nets and nothing is filtered against the physical possibilities of the layout.
Figure 4: Basic diagnosis flow that provides only layout-aware reporting.
Another method is to use the well-known technique of layout-feature extraction to generate a list of all possible defect sites (Figure 5). The diagnosis run is purely logical, and the layout extraction delivers a complete list of conceivable layout defect candidates. A results parser merges these two data sets by filtering the logical diagnosis result against the extracted data and drops a logical defect candidate if it is not in the extracted list.
Figure 5: Pre-extraction flow that can be used to generate a list of potential defect sites.
Extraction can be resource intensive, both for execution time and size of the results. Extracting all defect possibilities, such as all line-of-sight bridges for all nets in the entire chip, is impractical. Thus, the extraction flow needs to limit the extraction parameters. If the extraction limits are tight and the bridge defect is not on the list, the results parser would drop the initial diagnosis bridge candidate, and the bridge defect would not be in the final layout-aware report. This result would indicate that no bridge is possible because the real defect was pruned from the list by an arbitrary extraction limit.
This extraction flow also cannot distinguish between a two-way bridge and a dominant bridge. Because it identifies all bridge candidates within the extraction limit, the results parser could find the aggressor net of a dominant bridge (if it is in the list). However, because the results parser knows only the net names, not the logic values on the two nets for each passing and failing pattern, it cannot determine if the dominant bridge is actually possible.
Finally, a pre-extraction flow does not improve the open defect case. Even if the topology for all conceivable open net candidates can be extracted, this would require copying all polygons from the layout file into the extraction list and processing them afterward. This adds another step to the flow. The job could be done more efficiently with a layout engine rather than having to use a results parser.
Replacing the pre-extraction component with a post-diagnosis layout engine that, on-demand, extracts all required data from the layout offers the potential of unlimited layout information on a more focused part of the layout. In particular, a pre-set extraction limit is not needed because only small sections of the layout are under investigation at one time.
Of course, layout engines vary in their capabilities. Basic ones might execute fast, crude layout computations to catch only the easy cases quickly. For example, the calculation required to determine that net polygons are no closer than a certain minimum distance is relatively easy. It will not catch shielding or other polygon issues, but this alone will identify about 80% of all physically impossible bridges. Layout engines at the next level operate as well as a state-of-the-art extraction engine to remove all physically impossible bridge candidates.
Such an advanced layout engine can determine if there is an unobstructed line of sight between any two nets and consequently report all the usual bridge parameters. Because all layout-based algorithms are applied only to a very small area of the layout, outlined by the logic engine, the costly time and space consumption of unlimited extraction methods does not have a large impact on the diagnosis process.
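The crude minimum-distance check mentioned above can be sketched as a quick-reject test: before any detailed line-of-sight analysis, compare only the overall bounding boxes of the two nets. If even the bounding boxes are farther apart than the minimum bridging distance, no polygon pair can be close enough. Rectangles are (xmin, ymin, xmax, ymax); all names and numbers are illustrative:

```python
# Fast, crude pre-filter: if the overall bounding boxes of two nets are
# farther apart than min_dist, a bridge between them is physically
# impossible and no per-polygon analysis is needed.

def bbox(polygons):
    """Overall bounding box of a net's rectangles (xmin, ymin, xmax, ymax)."""
    return (
        min(p[0] for p in polygons), min(p[1] for p in polygons),
        max(p[2] for p in polygons), max(p[3] for p in polygons),
    )

def quick_reject(polys_a, polys_b, min_dist):
    """True if the bridge is impossible even at bounding-box precision."""
    a, b = bbox(polys_a), bbox(polys_b)
    dx = max(a[0] - b[2], b[0] - a[2], 0)
    dy = max(a[1] - b[3], b[1] - a[3], 0)
    return (dx * dx + dy * dy) ** 0.5 > min_dist
```

A check like this deliberately errs on the safe side: it never rejects a possible bridge, and pairs that survive it are passed on to the more expensive shielding and line-of-sight analysis.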
However, this post-diagnosis, layout-engine-based flow still does not improve the open defect case. Although the topology can be extracted, there is no way back into the logic engine to confirm that the passing and failing behavior of the sink gates matches the topology, since the logic diagnosis reports only the offending net, not the conditions causing the failure. Forwarding this information to the layout engine and acting on it can be difficult. To move the layout engine to a more effective position in the flow, we have to integrate it with the logic diagnosis.
Integrate the Layout Engine into Diagnosis
The post-diagnosis layout engine flow can be improved significantly by moving the layout engine itself into the diagnosis tool and establishing a two-way communication channel between the layout engine and the logic engine (see Figure 6). This is a prerequisite for correct identification of dominant bridges and open net segments.
Figure 6: The diagnosis flow with a fully integrated layout engine.
To identify the aggressor net for a dominant bridge, the layout engine first computes a complete list of neighbor net polygons in any unobstructed line-of-sight for all polygons of the victim net. Then the logic engine validates this list of physically possible neighbor nets against the logical possibilities. All remaining nets are valid aggressor nets for the identified victim net. These aggressors are both logically and physically possible. If no net from the list remains, a bridge is not possible; thus, the defect might be an open. With this flow, removal of a bridge candidate from the final result means that there is no physical possibility of bridging, whereas the removal of a bridge candidate in the pre-extraction flow means only that there was no bridge extracted due to bridge extraction limitations.
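The two-way decision described above can be summarized in a short orchestration sketch. Both engine inputs are stand-ins (hypothetical arguments, not a real tool's API): the layout engine supplies the physically possible neighbors of the victim net, and the logic engine supplies a predicate that validates each neighbor against the faulty behavior:

```python
# Hedged sketch of the integrated flow: layout engine proposes neighbors,
# logic engine validates them; an empty intersection reclassifies the
# suspect as an open on the victim net.

def classify_dominant_suspect(victim, physical_neighbors, logically_matches):
    """physical_neighbors: nets in unobstructed line of sight of the victim.
    logically_matches: logic-engine predicate telling whether a net's
    simulated values can explain the victim's faulty patterns."""
    aggressors = [n for n in physical_neighbors if logically_matches(n)]
    if aggressors:
        return ("dominant_bridge", aggressors)
    return ("open", [victim])
```

The key property of this flow is visible in the fallback branch: removing a bridge candidate here means no physical aggressor exists, not merely that one fell outside an extraction limit.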
Similarly, open net segments are computed by the logic engine based on topology information that is extracted on demand by the layout engine. The integrated communication channel between the layout and logic engines makes the diagnosis flow much more effective.
Layout data enables a diagnosis tool to provide defect classifications that are physically sound and to enhance its reports with net segments and defect bounding boxes. The most effective flow is based on tight communication between the layout engine and the logic engine. This makes it possible for the tool to remove more than 85% of all bridge suspects, leaving only the suspects that are both logically and physically possible. For opens, on average 70% of the net area is eliminated as not being capable of containing a single open defect. When layout-aware diagnosis is based on an advanced implementation and a well-thought-out flow, accuracy and resolution are improved, making it a powerful tool for both failure analysis and yield engineers.
1. Ruifeng Guo, Liyang Lai, Yu Huang, Wu-Tung Cheng, “Detection and Diagnosis of Static Scan Cell Internal Defect,” IEEE International Test Conference (ITC) 2008, Oct. 26-31, 2008, Paper: 17.2
2. Manish Sharma, Wu-Tung Cheng, Ting-Pu Tai, Y.S. Cheng, Will Hsu, Chen Liu, Sudhakar M. Reddy, Albert Man, “Faster Defect Localization in Nanometer Technology Based on Defective Cell Diagnosis,” IEEE International Test Conference (ITC) 2007, Oct. 21-26, 2007, Paper: 15.3
3. Yu Huang, Will Hsu, Yuan-Shih Chen, Wu-Tung Cheng, Ruifeng Guo, Albert Man, “Diagnose Compound Scan Chain and System Logic Defects,” IEEE International Test Conference (ITC) 2007, Oct. 21-26, 2007, Paper: 7.1
4. Wu Yang, Wu-Tung Cheng, Yu Huang, Martin Keim, Randy Klingenberg, “Scan Diagnosis and Its Successful Industrial Applications,” IEEE Asian Test Symposium (ATS) 2007, Oct. 8-11, 2007, Page(s): 215-215
5. Vishal J. Mehta, Malgorzata Marek-Sadowska, Kun-Han Tsai, Janusz Rajski, “Timing Defect Diagnosis in Presence of Crosstalk for Nanometer Technology,” International Test Conference (ITC) 2006, Oct. 22-27, 2006, Paper: 12.2
6. R. Desineni, O. Poku, R. D. Blanton, “A Logic Diagnosis Methodology for Improved Localization and Extraction of Accurate Defect Behavior,” International Test Conference (ITC) 2006, Oct. 22-27, 2006, Paper: 12.3
7. Huaxing Tang, Manish Sharma, Janusz Rajski, Martin Keim, Brady Benware, “Analyzing Volume Diagnosis Results with Statistical Learning for Yield Improvement,” European Test Symposium (ETS) 2007, May 20-24, 2007, Page(s):145-150
8. Wu-Tung Cheng, Kun-Han Tsai, Yu Huang, Nagesh Tamarapalli, Janusz Rajski, “Compactor Independent Direct Diagnosis,” IEEE Asian Test Symposium (ATS) 2004, Oct. 15-17 2004; Page(s): 204-209
9. Ray Talacka, Nandu Tendolkar, Cynthia Paquette, “Improving Yield using Scan and DFT based Analysis for High Performance PowerPC® Microprocessor,” International Symposium for Testing and Failure Analysis (ISTFA) 2006, Nov. 12-16, 2006, Page(s): 407-411
10. Chris Eddleman, Nagesh Tamarapalli, Wu-Tung Cheng, “Advanced Scan Diagnosis Based Fault Isolation and Defect Identification for Yield Learning,” International Symposium for Testing and Failure Analysis (ISTFA) 2005, Nov. 6-11, 2005, Page(s): 501-509
11. Andreas Leininger, Peter Muhmenthaler, Wu-Tung Cheng, Nagesh Tamarapalli, Wu Yang, Hans Tsai, “Compression Mode Diagnosis Enables High Volume Monitoring Diagnosis Flow,” IEEE International Test Conference (ITC) 2005, Nov. 6-11, 2005, Paper: 7.3
12. Christian Burmer, Andreas Leininger, Hans-Peter Erb, Markus Gruetzner, Thomas Schwemboeck, Stefan Trost, “Statistical Evaluation of Scan Test Diagnosis Results for Yield Enhancement of Logic Designs,” International Symposium for Testing and Failure Analysis (ISTFA) 2005, Nov. 6-11, 2005, Page(s): 395-400
13. M. Enamul Amyeen, Debashis Nayak, Srikanth Venkataraman, “Improving Precision Using Mixed-level Fault Diagnosis,” International Test Conference (ITC) 2006, Oct. 22-27, 2006, Paper: 22.3
14. Chia Ling Kong, Mohammed R. Islam, “Diagnosis of Multiple Scan Chain Faults,” International Symposium for Testing and Failure Analysis (ISTFA) 2005, Nov. 6-11, 2005, Page(s): 510-516
15. Deepa Gopu, George Ontko, Chin Phan, Kartik Ramanujachar, Scott Wills, Alan Hales, “Precise Fail site Isolation using a combination of Global, Software and Tester based Isolation Techniques,” International Symposium for Testing and Failure Analysis (ISTFA) 2004, Nov. 14-18, 2004, Page(s): 172-175
16. Dan Bodoh, Anthony Blakely, Terry Garyet, “Diagnostic Fault Simulation for the Failure Analyst,” International Symposium for Testing and Failure Analysis (ISTFA) 2004, Nov. 14-18, 2004, Page(s): 181-189
17. Camelia Hora, Stefan Eichenberger, “Towards High Accuracy Fault Diagnosis of Digital Circuits,” International Symposium for Testing and Failure Analysis (ISTFA) 2004, Nov. 14-18, 2004, Page(s): 47-51
18. Chen Liu, Wei Zou, Sudhakar M. Reddy, Wu-Tung Cheng, Manish Sharma, Huaxing Tang, “Interconnect open defect diagnosis with minimal physical information,” IEEE International Test Conference (ITC) 2007, Oct. 21-26, 2007, Paper: 7.3
19. Manish Sharma, Brady Benware, Lei Ling, David Abercrombie, Lincoln Lee, Martin Keim, Huaxing Tang, Wu-Tung Cheng, Ting-Pu Tai, Yi-Jung Chang, Reinhart Lin, Albert Man, “Efficiently Performing Yield Enhancements by Identifying Dominant Physical Root Cause from Test Fail Data,” IEEE International Test Conference (ITC) 2008, Oct. 26-31, 2008, Paper: 14.3
20. Jayanth Mekkoth, Murali Krishna, Jun Qian, Will Hsu, Chien-Hui Chen, Yuan-Shih Chen, Nagesh Tamarapalli, Wu-Tung Cheng, Jan Tofte, Martin Keim, “Yield Learning with Layout-aware Advanced Scan Diagnosis,” International Symposium for Testing and Failure Analysis (ISTFA) 2006, Nov. 12-16, 2006, Page(s): 412-418
About the Author
Dr. Martin Keim joined the Design-for-Test Division at Mentor Graphics Corporation, Wilsonville, Oregon, in 2001, starting as a software engineer in the ATPG and then in the diagnosis group. In 2007, he became a Technical Marketing Engineer for the yield and diagnosis products. Before Dr. Keim joined Mentor Graphics, he was with Infineon Technologies, Munich, Germany, as a test engineer for embedded memory products. He received his Ph.D. from the Albert-Ludwigs University in Freiburg im Breisgau, Germany, in 2003.