
Confronting Manufacturing Closure at 32nm and Below

The problems of managing manufacturing variability, which began at the 45nm node, grow even more severe with the move to 32nm and could now pose a significant threat to design schedules, IC performance, and the bottom line. As features shrink while the wavelength of light used in lithography stays the same, variations in line width, thickness, and other physical parameters have a growing effect on the yield and performance of ICs. Addressing this process-design gap – the gap between the feature size, 32nm, and the wavelength of light used to pattern those features, 193nm – is the fundamental purpose of today’s numerous and complex design rules.

20091117_mentor_fig1.jpg

Figure 1. The gap between the wavelength of light used for lithography and the feature resolution required. This gap is responsible for the growing number and complexity of design and manufacturing rules that the design must adhere to.

The margins of error for manufacturing concerns are now so narrow that design rule check (DRC) and design-for-manufacturing (DFM) problems cannot be properly fixed in post-layout processing alone; they must be addressed earlier in the design flow. The lack of advanced DRC/DFM models during design implementation, combined with the sheer number and complexity of DRC/DFM rules, increases the time needed to achieve physical design closure for advanced-node designs.

Rules of the 32/28nm Game

The number of DRC/DFM rules at 32nm has easily doubled since the 90nm node, and the rule complexity, measured by the number of operations required to verify the rules, has grown even faster (Figure 2).

 

20091117_mentor_fig2.jpg

Figure 2. The number of design rules, and the complexity of the rules, has risen steadily at each process node, leading to growing problems in design closure.

Traditionally, DRC has involved relatively simple geometric measurements of physical features that are very close to each other. In 32-nm designs, however, many issues can only be understood by evaluating the influence of many geometries at relatively greater distances. Traditional rule-based DRC is very flexible and fast to run but is relegated to simpler issues. Simulators can handle specific complex issues but are not as flexible and are CPU intensive. So there is a gap in the solution space between these two. 
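
To make the contrast concrete, here is a minimal sketch of the kind of simple, local geometric check that rule-based DRC performs. Python is used purely for illustration (it is not a production rule-deck language), and the rectangle coordinates and the 56nm spacing value are invented for the example.

```python
# Minimal sketch of a traditional rule-based DRC check: a fixed
# minimum-spacing measurement between nearby shapes. Rectangles are
# (x1, y1, x2, y2) in nanometers; all values are illustrative only.
from itertools import combinations

MIN_SPACING_NM = 56  # hypothetical metal-to-metal spacing rule

def edge_to_edge_spacing(a, b):
    """Smallest gap between two axis-aligned rectangles (0 if they touch or overlap)."""
    dx = max(b[0] - a[2], a[0] - b[2], 0)
    dy = max(b[1] - a[3], a[1] - b[3], 0)
    return (dx ** 2 + dy ** 2) ** 0.5

def check_spacing(shapes):
    """Flag every pair of shapes that sits closer than the fixed spacing rule."""
    violations = []
    for a, b in combinations(shapes, 2):
        s = edge_to_edge_spacing(a, b)
        if 0 < s < MIN_SPACING_NM:
            violations.append((a, b, s))
    return violations

# Two wires 40nm apart violate the 56nm rule; the third wire is clean.
wires = [(0, 0, 1000, 50), (0, 90, 1000, 140), (0, 300, 1000, 350)]
print(check_spacing(wires))
```

A check like this only ever compares one pair of shapes against one fixed threshold, which is exactly why it stays fast but cannot capture the longer-range, multi-geometry interactions that dominate at 32nm.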

Major foundries now require that their customers meet some DFM requirements, such as lithographic and planarity checking and compensation. At previous nodes, DRC/DFM closure involved significant processing after the design phase was completed. However, because the layout is largely fixed at that point, designers are severely limited in what changes can be made without causing unacceptable changes to the timing, SI, or power profiles. For example, adding metal fill or pushing wire edges to meet lithography requirements can cause degradation to timing and SI as the parasitic interactions change. 

The common workaround for this disconnect between design and sign-off is to build in larger timing margins, but the resulting hit to the IC’s performance is unacceptable because it gives away much of the advantage of moving to the next node in the first place.

Disconnect Between Design and Physical Sign-off 

When manufacturing fixes require more extensive changes to the layout, designers must perform time-consuming iterations between sign-off analysis, manual editing, and engineering change order (ECO) routing. This can introduce even more, and unexpected, violations that lead to lost performance, poor manufacturability, and declines in productivity as design teams chase non-convergent fixes for conflicting design metrics.

The sign-off ECO iterations also involve transfers of huge data files between implementation and verification tools. Not only does it take significant time to write and read data files on the order of hundreds of gigabytes, but potential errors in the process introduce even more risk into the iterations.

These factors point to a major disconnect between the design and sign-off stages, which could become even more problematic as designers incorporate advanced lithographic techniques, such as pattern decomposition for double patterning, at the 22nm node. The growing problem with manufacturing design closure should be addressed starting in design implementation, well before the traditional sign-off verification stage.

A Better Flow for 32-nm Manufacturing Closure

Addressing the gap between design and sign-off demands a robust, evolutionary advance in tool capabilities. The ability to capture manufacturing behaviors during design implementation would allow designers to analyze and optimize the impact of layout changes in the context of one another, in an efficient, automated fashion, throughout design and sign-off.

During design implementation, the routing, extraction, and timing functions must comprehend all 32-nm routing rules, such as the complex “end of line” rules illustrated in Figure 3. 

20091117_mentor_fig3.jpg

Figure 3. Implementation tools must obey all the complex DRC/DFM rules required for 32/28-nm manufacturability. The End of Line rules, shown here, are an example of advanced routing rules that must be met.
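
As a rough illustration of the end-of-line idea, the hypothetical check below requires extra clearance beyond a wire end when that end is narrower than a threshold. The rule form and all dimensions are assumptions made for the sketch; actual foundry EOL rules are considerably more involved.

```python
# Hypothetical "end of line" (EOL) style check: a wire end narrower than
# EOL_WIDTH must keep EOL_SPACE to the next shape ahead of it, versus the
# normal MIN_SPACE elsewhere. Dimensions and rule form are assumptions.

MIN_SPACE = 56   # normal metal spacing, nm (illustrative)
EOL_WIDTH = 70   # wire ends narrower than this trigger the EOL rule
EOL_SPACE = 90   # larger clearance required beyond a qualifying line end

def check_eol(wire, others):
    """Check the right-hand line end of a horizontal wire (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = wire
    width = y2 - y1
    required = EOL_SPACE if width < EOL_WIDTH else MIN_SPACE
    violations = []
    for ox1, oy1, ox2, oy2 in others:
        ahead = ox1 - x2                           # gap beyond the line end
        faces_end = not (oy2 <= y1 or oy1 >= y2)   # neighbor sits in front of the end
        if faces_end and 0 <= ahead < required:
            violations.append(((ox1, oy1, ox2, oy2), ahead, required))
    return violations

# A 60nm-wide wire end with a neighbor 80nm ahead passes the normal 56nm
# rule but fails the stricter 90nm end-of-line clearance.
print(check_eol((0, 0, 500, 60), [(580, -20, 900, 100)]))
```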

In addition to the DRC rules that ensure the IC will function, a number of advanced DFM checks should also be analyzed and fixed during place-and-route. Among these are critical area analysis (CAA), litho process checking (LPC) and three-dimensional (3D) variability (or planarity) analysis. CAA reduces the probability of random defects from particulates in the manufacturing process by maximizing the space between interconnects without increasing overall die size. This involves making adjustments to wire locations to optimize the use of “white space” in the layout, which should be performed in a timing-aware context.
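
The sketch below illustrates the reasoning behind CAA-driven wire spreading: for a parallel wire pair, the area in which a random particle can short the two wires shrinks as the spacing grows. The defect-size distribution (roughly 1/d^3) and every dimension here are illustrative assumptions, not foundry data.

```python
# Simplified critical-area reasoning for shorts between two parallel wires.
# A defect of diameter d can short wires spaced s apart anywhere in a band
# of height (d - s) along their parallel run, so the critical area for that
# defect size is run_length * (d - s) when d > s. The ~1/d^3 defect-size
# density and all dimensions are illustrative assumptions.

def critical_area_for_defect(spacing_nm, run_length_nm, defect_d_nm):
    """Critical area (nm^2) for one defect size on one parallel wire pair."""
    return run_length_nm * max(defect_d_nm - spacing_nm, 0)

def weighted_critical_area(spacing_nm, run_length_nm, d_min=40, d_max=400, step=10):
    """Average critical area over a 1/d^3 defect-size distribution."""
    total, norm = 0.0, 0.0
    for d in range(d_min, d_max + 1, step):
        w = 1.0 / d ** 3
        total += w * critical_area_for_defect(spacing_nm, run_length_nm, d)
        norm += w
    return total / norm

# Spreading two wires from 60nm to 100nm apart shrinks the weighted critical
# area, which is why timing-aware wire spreading into "white space" improves
# random-defect yield without growing the die.
print(weighted_critical_area(60, 10_000), weighted_critical_area(100, 10_000))
```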

LPC identifies specific locations where distortions will likely cause a defect, such as a short or open, or a potential timing issue due to altered device or interconnect shapes. This can also be modeled in the implementation stage when the design is more flexible. The key here is to have accurate models without the high runtime required for full sign-off LPC.
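
Real LPC relies on calibrated lithography process models, which cannot be reproduced here. As a loose stand-in, the sketch below ranks candidate hotspots by combining feature width with local pattern density, just to show how a fast, approximate in-design check might flag locations for closer attention; every shape, threshold, and formula is a made-up assumption.

```python
# Deliberately crude stand-in for in-design litho checking. Real LPC uses
# calibrated process models; this proxy just flags narrow features that sit
# in dense neighborhoods. All shapes, thresholds, and the scoring formula
# are hypothetical.

def local_density(shapes, center, window_nm=500):
    """Fraction of a square window around `center` covered by shapes."""
    cx, cy = center
    half = window_nm / 2
    wx1, wy1, wx2, wy2 = cx - half, cy - half, cx + half, cy + half
    covered = 0.0
    for x1, y1, x2, y2 in shapes:
        ox = max(0.0, min(x2, wx2) - max(x1, wx1))
        oy = max(0.0, min(y2, wy2) - max(y1, wy1))
        covered += ox * oy
    return covered / (window_nm ** 2)

def hotspot_score(width_nm, density, min_width_nm=45):
    """Higher score = more litho risk: narrow features in dense surroundings."""
    return (min_width_nm / width_nm) * density

# Eight parallel 50nm wires on a 110nm pitch: a fairly dense neighborhood.
shapes = [(0, i * 110, 2000, i * 110 + 50) for i in range(8)]
print(hotspot_score(50, local_density(shapes, (1000, 400))))  # ~0.38
```

An actual in-design LPC capability would replace this score with compact, foundry-calibrated litho models, keeping accuracy while avoiding the runtime of full sign-off simulation.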

Improvements to planarity could be achieved before sign-off verification through incremental, timing-driven metal fill insertion. The metal fill results in a more uniform thickness after chemical-mechanical polishing (CMP), which can reduce variations in interconnect resistance. But additional metal fill also introduces more parasitic capacitance. These effects must be analyzed in context of all the other design metrics—such as timing, SI, power, and area—and across all process corners, modes, and power states. A good solution to look for is “smart” metal fill, which performs analysis of multiple design objectives during the fill operation to minimize the total amount of fill required to meet DFM objectives.
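
A minimal sketch of the timing-aware, "smart" fill idea described above: fill shapes are added only until a density target is met, and candidates that sit too close to timing-critical nets are skipped to limit added coupling capacitance. The density target, keepout distance, and per-shape density contribution are all hypothetical.

```python
# Sketch of timing-aware "smart" metal fill: add fill only until the CMP
# density target is met, and skip candidates too close to timing-critical
# nets to limit added coupling capacitance. Targets, keepouts, and the
# per-shape density contribution are hypothetical.

TARGET_DENSITY = 0.30   # assumed CMP density target per window
CRITICAL_KEEPOUT = 200  # nm of clearance kept around timing-critical nets

def gap(a, b):
    """Smallest gap between two rectangles (x1, y1, x2, y2); 0 if they touch."""
    dx = max(b[0] - a[2], a[0] - b[2], 0)
    dy = max(b[1] - a[3], a[1] - b[3], 0)
    return (dx ** 2 + dy ** 2) ** 0.5

def smart_fill(window_density, candidates, critical_nets):
    """Add fill candidates until the density target is met, honoring keepouts."""
    added = []
    for fill in candidates:
        if window_density >= TARGET_DENSITY:
            break                                   # minimal fill: stop at the target
        if any(gap(fill, net) < CRITICAL_KEEPOUT for net in critical_nets):
            continue                                # would add coupling cap to a critical net
        added.append(fill)
        window_density += 0.02                      # assumed density contribution per shape
    return added, window_density

fills, density = smart_fill(
    window_density=0.22,
    candidates=[(i * 300, 0, i * 300 + 100, 100) for i in range(10)],
    critical_nets=[(600, 0, 900, 100)],             # candidates near this net are skipped
)
print(len(fills), round(density, 2))                # 4 shapes added, density reaches 0.3
```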

All the manufacturing-related optimizations performed during design must correlate closely with the models employed during final sign-off verification. When all the tools in a design flow have consistent information throughout design creation, implementation, and manufacturing, the time required to achieve sign-off should remain manageable even as closure becomes more complicated.

Minimize Post-GDSII Manipulations

To maintain the expected time to closure, designers need to resolve as many DFM issues as possible before the extensive post-GDSII sign-off analysis. Many DFM corrections made after the design is complete are suboptimal at best. Designers can minimize the time-consuming sign-off iterations by relying more on the DFM prevention and repair techniques available in place-and-route. This is where the contextual impact of layout features can be assessed and resolved with enough flexibility to improve all design metrics concurrently. Prevention and optimization within place-and-route may not completely eliminate the need for sign-off DFM, but it should improve the closure process significantly.

To this end, design tools should be able to use new DRC/DFM techniques, such as equation-based DRC (Figure 4) and pattern matching. 

20091117_mentor_fig4.jpg

Figure 4. Equation-based DRC bridges the gap between simple, rule-based DRC and the more time-consuming process simulation.

Equation-based DRC allows the DRC engine to consider more geometric influences at greater distances and then calculate the mathematical relationships between them to determine the potential impact of a variety of manufacturing issues. It combines the speed and flexibility of traditional DRC with the ability to handle much more complex manufacturing effects that could previously be verified only with simulators.
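
A small sketch of what "equation-based" means in practice: the required spacing is computed from several measured quantities at once (here, wire widths and parallel run length) instead of being a single fixed number. The equation and its coefficients are invented for illustration; real foundry decks define their own multi-variable expressions.

```python
# Sketch of an equation-based spacing rule: the required spacing is a
# function of both wire widths and their parallel run length rather than a
# single fixed value. The equation and coefficients are invented for
# illustration.

def required_spacing(width_a, width_b, parallel_run_nm,
                     base_nm=56, k_width=0.20, k_run=0.01, run_cap_nm=2000):
    """Wider wires and longer parallel runs demand more spacing."""
    width_term = k_width * max(max(width_a, width_b) - 50, 0)
    run_term = k_run * min(parallel_run_nm, run_cap_nm)
    return base_nm + width_term + run_term

def check_pair(width_a, width_b, parallel_run_nm, actual_spacing_nm):
    need = required_spacing(width_a, width_b, parallel_run_nm)
    return actual_spacing_nm >= need, need

# Two 50nm wires 60nm apart pass for a short parallel run but fail once they
# run side by side for 1um, a distinction a single fixed-value rule cannot
# express.
print(check_pair(50, 50, 200, 60))    # (True, 58.0)
print(check_pair(50, 50, 1000, 60))   # (False, 66.0)
```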

Pattern matching allows DRC analysis to use a library of visual patterns, rather than a scripting language that describes relationships between geometries. Pattern matching is an extension of DRC for checks with very high levels of complexity, and simplifies the problem of communication between design and manufacturing.
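
The sketch below shows pattern matching at its simplest: a known problematic configuration stored as a small grid of metal/empty cells, and a scan of the layout raster for exact occurrences. The 3x3 pattern and the layout grid are invented for the example; production pattern matchers operate on actual layout geometry and typically handle orientations and tolerances.

```python
# Simplest form of pattern matching: a known problematic configuration is
# stored as a small grid of metal (1) / empty (0) cells, and the layout
# raster is scanned for exact occurrences. Pattern and layout are invented.

BAD_PATTERN = [
    [1, 0, 1],
    [1, 0, 1],
    [0, 0, 1],
]

def find_pattern(layout, pattern):
    """Return (row, col) of every exact occurrence of `pattern` in `layout`."""
    ph, pw = len(pattern), len(pattern[0])
    hits = []
    for r in range(len(layout) - ph + 1):
        for c in range(len(layout[0]) - pw + 1):
            window = [row[c:c + pw] for row in layout[r:r + ph]]
            if window == pattern:
                hits.append((r, c))
    return hits

layout = [
    [1, 0, 1, 1, 0],
    [1, 0, 1, 1, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
print(find_pattern(layout, BAD_PATTERN))  # [(0, 0)]
```

Because the pattern itself is the rule, known-bad configurations can be shared between manufacturing and design without translating each one into a scripted geometric check.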

Conclusion 

The demands of manufacturing variability, in the form of complex DRCs and DFM enhancements, exceed traditional tool capabilities. EDA vendors must respond with new place-and-route tools built from the ground up to handle these new challenges. These tools must support advanced DRC/DFM closure techniques to generate designs that are optimized for all the factors that ensure competitive, manufacturable, high-yield nanometer IC products.


20091117_mentor_sjilla.jpg

About the Author: Sudhakar Jilla is the marketing director at Mentor Graphics. Over the past 15 years, he has held various application engineering, marketing, and management roles in the EDA industry. He holds a Bachelor's degree in Electronics and Communications from the University of Mysore, a Master's degree in Electrical Engineering from the University of Hawaii, and an MBA from the Leavey School of Business, Santa Clara University.
