
Silicon Lineup

Pattern Matching Helps Identify Silicon Suspects

There’s a pretty simple bottom line when designing an IC. Yes, there are functionality requirements, cost goals, performance targets, and any number of must-haves, all of which must be weighed, adjusted, and satisfied. But, given the millions of dollars it costs to buy a set of masks, at the end of it all is one overall abiding mandate: Don’t Screw Up. (That’s the PG version.)

And of course, courtesy of our mushrooming technology complexity, we have at our disposal increasingly diverse ways of screwing up. And none are more daunting than the burgeoning number of restrictions being placed on physical layout.

We’ve always had rules to follow. Back in the day when our ancestral forebears actually had to cut rubylith to create a layout, you still had to make sure that your dimensions conformed to the lithographic capabilities of the day. The thing is, there were a handful of rules to meet. Don’t make things too narrow. Don’t space things too closely together. That sort of thing.

As we’ve shrunk the technology, the number of rules has multiplied, but there’s one fundamental change that has accelerated that proliferation: the reduction of dimensions below the wavelength of the light used to expose the mask patterns onto the wafer. Diffraction effects have consequences that, at first glance, make no sense. The constraints placed on one feature can be affected by something a few features away. It’s no longer good enough to play nicely with your neighbor; you have to take the entire neighborhood into account. Strange wind patterns can take the tantalizing scent of your outdoor meatfest, lift it over the house next door, and deliver it to the staunch militant vegans two doors down.

The tried-and-true methodology for checking layout conformance is design-rule checking, or DRC. A designer or technologist establishes a list of rules that must be met and codes it into a “deck” that a DRC program uses to grade your layout efforts. (The quaint old notion of a “deck” immediately brings to mind the unforgettable sound of the old card reader in the basement of Evans Hall spitting through a deck of punchcards at 2 AM as an all-nighter got underway…)
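
At its heart, a deck is just a list of geometric checks. As a minimal sketch of what two classic checks do – written in Python rather than any vendor’s actual deck language, with entirely hypothetical rule values – it might look something like this:

```python
# Minimal sketch of two classic DRC checks. Not any vendor's deck
# syntax; all rule values are hypothetical. A shape is a rectangle
# (x1, y1, x2, y2) in nanometers.
import math

MIN_WIDTH = 45     # hypothetical minimum feature width
MIN_SPACING = 65   # hypothetical minimum feature-to-feature spacing

def width_violations(shapes):
    """Flag any shape narrower than MIN_WIDTH in either dimension."""
    return [s for s in shapes
            if min(s[2] - s[0], s[3] - s[1]) < MIN_WIDTH]

def spacing_violations(shapes):
    """Flag any pair of shapes closer than MIN_SPACING, edge to edge."""
    bad = []
    for i, a in enumerate(shapes):
        for b in shapes[i + 1:]:
            dx = max(b[0] - a[2], a[0] - b[2], 0)  # horizontal gap
            dy = max(b[1] - a[3], a[1] - b[3], 0)  # vertical gap
            gap = math.hypot(dx, dy)
            if 0 < gap < MIN_SPACING:
                bad.append((a, b))
    return bad
```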

Which means that the program has to go through the entire chip, checking every rule. Which takes longer and longer as the chips get bigger. And which takes longer upon longer when you start adding rules to the bigger chips. And the number of rules has exploded from the tens to the hundreds and even thousands for upcoming technology nodes. Especially when taking into account recommended rules.

Some relief has been afforded by “equation-based” rules (not to be confused with models). These essentially allow you to parameterize elements of a rule so that, instead of having a number of versions of the same rule to handle different circumstances, you can build those circumstances into the equation and fold them all into a single equation-based rule. Mentor’s Michael White has some numbers showing an example of how a series of simple rules taking a total of 54 operations can be replaced with an equation-based rule requiring three operations. Not only is the deck simpler, it also runs faster.
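
The flavor of it is easy to see in a sketch. Suppose the required spacing between two wires depends on the width of the wider one: the table-driven way needs one rule per width bucket, while the equation-based way folds the whole family into a single expression. (Everything below is hypothetical Python for illustration – not Mentor’s example, and not any real deck syntax.)

```python
# The old way: a pile of discrete rules, one per width bucket.
# All numbers are hypothetical. Units are nanometers.
SPACING_TABLE = {45: 65, 90: 80, 180: 100, 360: 140}  # width -> spacing

def required_spacing_discrete(width):
    """Look up the rule for the smallest width bucket that applies."""
    for bucket in sorted(SPACING_TABLE):
        if width <= bucket:
            return SPACING_TABLE[bucket]
    return SPACING_TABLE[max(SPACING_TABLE)]

# The equation-based way: one parameterized rule covers the whole
# family, including widths the table never anticipated.
def required_spacing_eq(width):
    return max(65.0, 65.0 + 0.25 * (width - 45))
```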

This has helped, but it is still inadequate going forward. Some geometries are very hard to describe. Let’s say your neighborhood has restrictions on the use and placement of geodesic domes. Articulating that rule based on a mathematical description defining the characteristics of a geodesic dome would not be fun. But if all you have at your disposal are equation-based rules, you have to do the best you can.

One development can theoretically help: the use of models for simulating the lithographic effects. These make use of the underlying physics to model the interaction of light and the mask. The problem is that the accuracy comes at the price of long run times – on the order of a hundred times slower. That’s fine for foundries developing the set of rules for a new technology: they do the simulations once to get the rules and then send the rules out to designers to use for all their designs. But it would take forever to do full-chip simulations for sign-off. And simulation results are generally designed to be reviewed quantitatively; they’re not so much used on a pass/fail basis as is needed to bless a design.
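
To see why the run times balloon, consider what even a cartoon version of a litho model has to do: compute an aerial image at every pixel of the layout and then decide what prints. The sketch below uses a Gaussian blur as a crude stand-in for the real optics – actual models use rigorous imaging math – and exists only to show the shape of the computation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def printed_pattern(mask, blur_sigma=2.0, threshold=0.3):
    """Toy litho 'simulation': mask is a 2D 0/1 pixel array.
    Blur it to approximate the aerial image, then threshold to see
    what prints. A real model replaces the blur with physics, and
    the cost is paid at every pixel of the full chip."""
    aerial = gaussian_filter(mask.astype(float), sigma=blur_sigma)
    return aerial > threshold
```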

So even though model-based simulation can provide the accuracy needed, it’s not practical as a design tool. In fact, most of the EDA guys see litho simulation as a foundry tool. And so a new approach is being readied to help control the rule craziness for actual designers: pattern matching. Rather than defining rules, you describe situations that can be problematic. Rather than trying to describe a geodesic dome mathematically, just say, “Houses that look like this are at high risk for toxic levels of patchouli.”

With patterns, a rule becomes a description of a scenario that poses a threat to yield. Not only does this cover more cases than can reasonably be handled with equation-based rules, but these checks can also execute more quickly. Instances can be flagged for review and editing; it remains to be seen whether this will be linked with layout tools for automated fixing of problems.
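
In its simplest form, the matching itself is a sliding-window comparison of a rasterized layout against a library of known-bad snippets. A minimal sketch, assuming 0/1 pixel grids (real tools match geometry directly and are far more sophisticated):

```python
import numpy as np

def find_pattern(layout, pattern):
    """Return the (row, col) of every exact occurrence of `pattern`
    (a small 0/1 array) within `layout` (a large 0/1 array)."""
    ph, pw = pattern.shape
    lh, lw = layout.shape
    hits = []
    for r in range(lh - ph + 1):
        for c in range(lw - pw + 1):
            if np.array_equal(layout[r:r + ph, c:c + pw], pattern):
                hits.append((r, c))
    return hits
```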

The kinds of problems being flagged in these checks aren’t always cut-and-dried. You’re managing yield – you may not have a failure on your hands, but from lot to lot, wafer to wafer, or even across a single wafer or die, a particular configuration may yield poorly or inconsistently. Your ability (or the time available) to fix it may be limited. So patterns can be built with a certain amount of “fuzziness,” meaning that a single pattern can describe a whole range of variations on a given configuration.
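
One simple way to build in that fuzziness is to accept windows that differ from the pattern by no more than some bounded number of pixels, rather than demanding an exact match. A sketch extending the matcher above (the tolerance mechanism here is illustrative; real tools also support things like ranges on edge dimensions):

```python
import numpy as np

def find_pattern_fuzzy(layout, pattern, max_mismatch=4):
    """Like find_pattern, but tolerate up to `max_mismatch` differing
    pixels per window. The tolerance is the knob that controls how
    wide a net the pattern casts."""
    ph, pw = pattern.shape
    lh, lw = layout.shape
    hits = []
    for r in range(lh - ph + 1):
        for c in range(lw - pw + 1):
            window = layout[r:r + ph, c:c + pw]
            if np.count_nonzero(window != pattern) <= max_mismatch:
                hits.append((r, c))
    return hits
```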

This flexibility comes with a cost, however. Such a flexible rule casts a wide net and may catch more things than you’re looking for. There are a couple of ways to filter the results. You might filter as you run the rule, specifying how tightly or loosely to apply it. Another use model is to run the rule to generate a number of possible problem candidates and then run model-based simulation on those candidates to isolate the ones that are truly problems and need to be fixed.
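
That second use model is a coarse-to-fine funnel: cheap fuzzy matching nominates candidates, and the expensive simulation runs only on the nominees. A sketch of the flow, reusing the hypothetical pieces above:

```python
def confirmed_problems(layout, pattern, simulate, max_mismatch=4):
    """Two-stage check: fuzzy matching nominates candidate sites,
    then a slow litho simulation (abstracted here as a hypothetical
    `simulate(site)` callable returning True for real failures)
    confirms which ones actually need fixing."""
    candidates = find_pattern_fuzzy(layout, pattern, max_mismatch)
    return [site for site in candidates if simulate(site)]
```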

Patterns are also a convenient way for foundries to protect their IP. Full simulation models may reveal more about the technology than they would like to disclose. Just as they do with rules, they can use the simulations in-house to define the patterns; making the patterns available instead of the full models thus hides the process details underlying them.

Of course, it could also be possible for users to define their own patterns, although the vendors appear mixed on how they view the utility of this. As tools start to become available, it’s likely that some will reserve pattern definition for the foundries, while others will extend that capability to designers as well.

As to when this will become available, there’s one initial surprising fact: Magma already ships pattern-matching software, but it’s for a different use model. It’s very helpful for failure analysis engineers so that, upon finding the root cause of a failure, they can define the pattern and check the rest of the die to see if there is perhaps another instance lurking, as yet unexpressed. It’s not, however, available from them yet for chip sign-off.

In fact, the vendors seem to be divided on whether pattern-matching will even be suitable for sign-off on its own. Cadence and Mentor talk in terms of including rules and simulations for sign-off, with pattern-matching being a time-saver to identify the spots to verify. Synopsys isn’t so keen on modeling in the sign-off loop because it doesn’t do so well in a pass/fail environment.

Pattern matching is often linked to double-patterning since it can be used to help identify stitching situations. And double-patterning is still a few nodes away. Both Synopsys and Magma talk about pattern-matching showing up around the 22-nm node as an adjunct to DRC. No one is talking publicly about specific availability dates yet.

What is clear is that there is much active work ongoing; this is not simply a gleam in developers’ eyes. Sooner or later it’s pretty much guaranteed that you’ll have one more tool available to keep you from screwing up.
