
A Clean-By-Construction Deck

Sage Introduces a DRC Compiler

So you’ve spent many months on a chip design project that’s winding down. Layout is done, or at least you hope it’s done, and it’s time to make sure that you did it right. (Yes, I know… most designs today will involve many people doing different things, so the “you” here is intended to refer collectively to all of you.)

That means it’s time to run your design rule checks (DRCs). And as the computer hums away on your DRC deck (presumably so-called because at one time it was a deck of Hollerith cards), inspecting every nook and cranny (you hope) for violations, did you ever wonder where that deck came from?

It’s easy to assume that there is some methodical process by which a committee of technology dudes (“dude” in the gender-neutral sense) sat around planning a technology and a set of rules to live by, which they then codified into a DRC deck and pronounced good.

Well, to hear a new startup, Sage, describe it, seeing where your DRC deck comes from is probably more like watching sausage get made. It may not be something you want to witness. And, in fact, with the size of the decks these days, there’s got to be a veritable sausage-fest going on in the technology houses.

It’s not a simple matter of codifying knowledge into specific, executable DRC rules. And here’s where we have to define some nuances more closely. I’ve always thought of DRC decks as consisting primarily of rules that can be tested as pass or fail. But it’s not quite that simple. Yes, there is a description of rules, but then there’s a lot of specific code that goes into how you identify violations on-screen.

You might think that a layout program would have a layer for the layout graphics and then a separate “signaling” layer for issues, but apparently not. The DRC rule code literally instantiates polygons that point out to you the location of a violation. And it’s up to the coder to decide how best to represent that violation.
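To make the marker-polygon idea concrete, here’s a minimal sketch in Python of what one hand-written check does internally. This is not Sage’s tool or any vendor’s actual rule language (real decks are written in dedicated languages such as Calibre’s SVRF); the `Rect` type, the rule value, and the marker shape are all invented for illustration, and the shapes are simplified to axis-aligned rectangles.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x1: float
    y1: float
    x2: float
    y2: float

def spacing(a: Rect, b: Rect) -> float:
    """Edge-to-edge distance between two non-overlapping rectangles."""
    dx = max(b.x1 - a.x2, a.x1 - b.x2, 0.0)
    dy = max(b.y1 - a.y2, a.y1 - b.y2, 0.0)
    return (dx * dx + dy * dy) ** 0.5

def check_min_spacing(shapes: list[Rect], min_space: float) -> list[Rect]:
    """Return a marker polygon for each pair of shapes that violates
    the minimum-spacing rule."""
    markers = []
    for i, a in enumerate(shapes):
        for b in shapes[i + 1:]:
            if 0.0 < spacing(a, b) < min_space:
                # How to draw the marker is the coder's choice; here we
                # cover the bounding box of the offending pair so the
                # violation is easy to spot on screen.
                markers.append(Rect(min(a.x1, b.x1), min(a.y1, b.y1),
                                    max(a.x2, b.x2), max(a.y2, b.y2)))
    return markers
```

Note that the decision in the comment — what polygon best represents the violation — is exactly the per-rule judgment call the article describes, multiplied across thousands of rules.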

Such a rule has to be manually written and coded. Each one. One after another.

And where do the guys writing these rules get their input? From, say, a spreadsheet that carefully describes, in mathematical terms, the rule and any parameters or conditions and specifically how it should be represented on screen? Nope. They get it from a document. A written document called the Design Rule Manual (DRM… no, not that DRM).

The DRM has text descriptions of the rules. The DRC coder reads those descriptions and then decides how to devise a test for them. That means interpreting the intent of the rule correctly (there may be ambiguities). The figuring-out-how-to-test-it part may also not be trivial, and there are cases where the exact rule is too hard to codify and an approximation must do.
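Here’s a toy illustration of that ambiguity, with a made-up rule and value. Suppose the DRM says “Metal1-to-Metal1 spacing shall be at least 0.10 µm” but doesn’t say whether corner-to-corner (diagonal) gaps count as spacing. Two coders can honestly produce two different decks:

```python
from math import hypot

def gaps(a, b):
    """Per-axis gap between two axis-aligned rects given as (x1, y1, x2, y2)."""
    dx = max(b[0] - a[2], a[0] - b[2], 0.0)
    dy = max(b[1] - a[3], a[1] - b[3], 0.0)
    return dx, dy

def violates_euclidean(a, b, min_space=0.10):
    # Interpretation A: the corner-to-corner (diagonal) distance counts.
    dx, dy = gaps(a, b)
    return 0.0 < hypot(dx, dy) < min_space

def violates_edge_only(a, b, min_space=0.10):
    # Interpretation B: only directly facing edges count; diagonally
    # offset shapes are exempt.
    dx, dy = gaps(a, b)
    facing = (dx == 0.0) != (dy == 0.0)  # gap along exactly one axis
    return facing and max(dx, dy) < min_space

a = (0.0, 0.0, 1.0, 1.0)
b = (1.04, 1.04, 2.0, 2.0)  # diagonal neighbor, ~0.057 corner-to-corner gap
```

For this pair, interpretation A flags a violation and interpretation B passes it clean — same prose, two different decks. That’s the kind of divergence that only surfaces later as deck bugs.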

So here we have two levels of interpretation. The first is when the DRM author interprets a rule and casts that interpretation into prose, and the second is when the poor guy writing the code reads that prose and decides how to turn that into actual DRC deck code.

This might not be so bad if there weren’t so many rules. In fact, we’ve lived with it for a long time, and it seems to have been manageable as long as the number of rules remained in the few hundreds or so. But those days are gone. Here are some numbers Sage puts forth: a modern DRM is something like a 600-page document containing about 5,000 rules. Those rules will turn into about 100,000 lines of DRC code. All manually written. Text and code.

As a result, it can be an incredible chore to validate the DRC code against the DRM to figure out if all of the rules got covered without escape loopholes and without tightening down too far. Which means that, when the decks are first released, they typically have bugs in them, and it can take years before you can really say that they’re clean.

So Sage is proposing a design rule compiler. OK, more than proposing: they’ve built one and recently introduced it. They call it iDRM. The way it works is by allowing graphical entry of a rule, along with conditions and other complicating factors. The tool will then create an executable check.

From that rule, iDRM can then create test layouts that should pass and fail. These can be used to validate the rule by running the check against the test structures. Users can also take existing layouts and check them against the rules, listing all violations.
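The generate-and-self-check idea can be sketched as follows. This is my own toy version, not Sage’s implementation or API: a one-parameter minimum-spacing rule, from which we derive one layout that must pass and one that must fail, then run the check against both.

```python
def min_spacing_violations(shapes, min_space):
    """Toy check: pairs of side-by-side rects (x1, y1, x2, y2) whose
    horizontal gap is smaller than min_space."""
    bad = []
    for i, a in enumerate(shapes):
        for b in shapes[i + 1:]:
            gap = max(b[0] - a[2], a[0] - b[2], 0.0)
            if 0.0 < gap < min_space:
                bad.append((a, b))
    return bad

def make_test_layouts(min_space):
    """From the rule's one parameter, derive a should-pass layout
    (gap = 2x the minimum) and a should-fail layout (gap = half)."""
    base = (0.0, 0.0, 1.0, 1.0)
    passing = [base, (1.0 + 2 * min_space, 0.0, 2.0 + 2 * min_space, 1.0)]
    failing = [base, (1.0 + min_space / 2, 0.0, 2.0 + min_space / 2, 1.0)]
    return passing, failing

MIN_SPACE = 0.10
ok, bad = make_test_layouts(MIN_SPACE)
assert min_spacing_violations(ok, MIN_SPACE) == []       # clean layout passes
assert len(min_spacing_violations(bad, MIN_SPACE)) == 1  # violation is flagged
```

If the should-fail layout slipped through, you’d know immediately that the rule as entered doesn’t capture the intent — which is the whole point of closing the loop at definition time rather than years later.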

The idea here is to skip the whole writing-a-War-and-Peace-tome thing and use this tool instead to codify the rules. You then get a one-to-one correspondence between each rule and a correct-by-construction executable version of the rule.

Whoever uses this can take advantage of the test structures and layout scans to confirm that the rule was defined properly. If there are too many misses (false negatives), then you can see right then and there that you need to tighten up the rule. If you’re getting too many false positives, then you need to find a way to express the intent of the rule that’s not so restrictive.

The part that’s not done yet is taking those rules and translating them into specific code that can be run by third-party DRC engines like Calibre, IC Validator, or Assura. Sage calls that DRC Synthesis. That’s an obvious next step in providing a complete process-concept-to-runnable-DRC-deck flow. But the focus right now is on getting engineers in various parts of the silicon ecosystem started using iDRM to create the rules. There’s plenty of work to do at that stage. Which presumably gives Sage more breathing space to get the last bit done.

In case any of you are thinking that this is a foundry-only tool, that’s not necessarily the case. Yes, the initial process design kit (PDK) will be developed at the foundry, and foundries will probably constitute the bulk of the users. But fabless houses often supplement the basic PDK with rules of their own, so they would also be able to leverage iDRM to do that. Failure analysis folks may also want to play with it in order to resolve failures or yield issues that may turn out to be related to layout.

As for the rest of you, well, if it all works as promised, you just might sleep a bit better knowing that the deck you’re working with is clean. Or cleaner than usual, anyway. And if you want to go see how that deck got made, well, it might not seem quite as gross anymore.

 

More info:

Sage DA
