
A Clean-By-Construction Deck

Sage Introduces a DRC Compiler

So you’ve spent many months on a chip design project that’s winding down. Layout is done, or at least you hope it’s done, and it’s time to make sure that you did it right. (Yes, I know… most designs today will involve many people doing different things, so the “you” here is intended to refer collectively to all of you.)

That means it’s time to run your design rule checks (DRCs). And as the computer hums away on your DRC deck (presumably so-called because at one time it was a deck of Hollerith cards), inspecting every nook and cranny (you hope) for violations, did you ever wonder where that deck came from?

It’s easy to assume that there is this methodical process by which some committee of technology dudes (“dude” in the gender-neutral sense) sat around planning a technology and a set of rules to live by, which they then codified into a DRC deck and pronounced good.

Well, to hear a new startup, Sage, describe it, seeing where your DRC deck comes from is probably more like watching sausage get made. It may not be something you want to witness. And, in fact, with the size of the decks these days, there’s got to be a veritable sausage-fest going on in the technology houses.

It’s not simply a matter of codifying knowledge into specific, executable DRC rules. And here’s where we have to define some nuances more closely. I’ve always thought of DRC decks as consisting primarily of rules that can be tested as pass or fail. But it’s not quite that simple. Yes, there is a description of rules, but then there’s a lot of specific code that goes into how you identify violations on-screen.

You might think that a layout program would have a layer for the layout graphics and then a separate “signaling” layer for issues, but apparently not. The DRC rule code literally instantiates polygons that point out to you the location of a violation. And it’s up to the coder to decide how best to represent that violation.
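To make that concrete, here’s a minimal sketch of what one such hand-coded rule amounts to: a minimum-spacing check that, on failure, emits a marker polygon covering the offending gap. The rectangle format, marker shape, and function name are illustrative assumptions, not any real deck syntax.

```python
def spacing_violations(rects, min_space):
    """Return marker rectangles for every pair of (x1, y1, x2, y2)
    rectangles that overlap vertically and sit closer than min_space
    horizontally. Each marker polygon spans the too-small gap."""
    markers = []
    for i, a in enumerate(rects):
        for b in rects[i + 1:]:
            # Only consider pairs that overlap vertically.
            if a[1] < b[3] and b[1] < a[3]:
                gap = max(a[0], b[0]) - min(a[2], b[2])
                if 0 <= gap < min_space:
                    # The marker is itself just another polygon,
                    # drawn where the violation occurred.
                    markers.append((min(a[2], b[2]), max(a[1], b[1]),
                                    max(a[0], b[0]), min(a[3], b[3])))
    return markers

metal = [(0, 0, 10, 5), (12, 0, 20, 5)]   # gap of 2 between the shapes
print(spacing_violations(metal, min_space=3))  # → [(10, 0, 12, 5)]
```

Note that the coder decides everything here: not just the geometric test, but what the marker looks like and where it lands. Multiply that by thousands of rules and the scale of the problem becomes clear.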

Such a rule has to be manually written and coded. Each one. One after another.

And where do the guys writing these rules get their input? From, say, a spreadsheet that carefully describes, in mathematical terms, the rule and any parameters or conditions and specifically how it should be represented on screen? Nope. They get it from a document. A written document called the Design Rule Manual (DRM… no, not that DRM).

The DRM has text descriptions of the rules. The DRC coder reads those descriptions and then decides how to devise a test for them. That means interpreting the intent of the rule correctly (there may be ambiguities). The figuring-out-how-to-test-it part may also not be trivial, and there are cases where the exact rule is too hard to codify and an approximation must do.

So here we have two levels of interpretation. The first is when the DRM author interprets a rule and casts that interpretation into prose, and the second is when the poor guy writing the code reads that prose and decides how to turn that into actual DRC deck code.

This might not be so bad if there weren’t so many rules. In fact, we’ve lived with it for a long time, and it seems to have been manageable as long as the number of rules remained in the few hundreds or so. But those days are gone. Here are some numbers Sage puts forth: a modern DRM is something like a 600-page document containing about 5,000 rules. Those rules will turn into about 100,000 lines of DRC code. All manually written. Text and code.

As a result, it can be an incredible chore to validate the DRC code against the DRM to figure out if all of the rules got covered without escape loopholes and without tightening down too far. Which means that, when the decks are first released, they typically have bugs in them, and it can take years before you can really say that they’re clean.

So Sage is proposing a design rule compiler. OK, more than proposing: they’ve built one and recently introduced it. They call it iDRM. The way it works is by allowing graphical entry of a rule, along with conditions and other complicating factors. The tool will then create an executable check.

From that rule, iDRM can then create test layouts that should pass and fail. This can be used to test the rules by running them against the test structures. They can also take existing layouts and check them against the rules, listing all violations.
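The generate-then-verify loop might look something like the following sketch: for a minimum-spacing rule, build one layout that should pass and one that should fail, then confirm the check agrees. The rule value, layout format, and helper names are invented for the example; Sage hasn’t published iDRM internals.

```python
MIN_SPACE = 3  # hypothetical rule value

def make_pair(gap):
    """Two equal-height rectangles separated horizontally by `gap`."""
    return [(0, 0, 10, 5), (10 + gap, 0, 20 + gap, 5)]

def passes(rects):
    """True if the layout satisfies the spacing rule."""
    a, b = rects
    return (b[0] - a[2]) >= MIN_SPACE

pass_layout = make_pair(MIN_SPACE)       # exactly at the limit: legal
fail_layout = make_pair(MIN_SPACE - 1)   # one unit too close: violation

# A correctly captured rule must accept the first and flag the second.
assert passes(pass_layout)
assert not passes(fail_layout)
```

The point is that the test structures come from the same rule definition as the check itself, so a mismatch flags a badly captured rule rather than a badly written test.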

The idea here is to skip the whole writing-a-War-and-Peace-tome thing and use this tool instead to codify the rules. You then get a one-to-one correspondence between each rule and a correct-by-construction executable version of the rule.

The guy using this can take advantage of the test structures and layout scans to confirm whether he or she defined the rule properly. If there are too many misses (false negatives), then you can see right then and there that you need to tighten up the rule. If you’re getting too many false positives, then you need to find some way to express the intent of the rule that’s not so restrictive.
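Scoring a candidate rule against layouts with known labels could be as simple as the following sketch. The data and the two deliberately bad checks are invented to show the two failure modes.

```python
def score(check, labeled_layouts):
    """check(layout) -> True if a violation is flagged.
    labeled_layouts: list of (layout, truly_violates) pairs.
    Returns (false_positives, false_negatives)."""
    false_pos = sum(1 for lay, bad in labeled_layouts if check(lay) and not bad)
    false_neg = sum(1 for lay, bad in labeled_layouts if bad and not check(lay))
    return false_pos, false_neg

# Each "layout" is just a gap value; True means it genuinely violates.
cases = [([2], True), ([5], False), ([1], True), ([8], False)]

too_strict = lambda lay: lay[0] < 6   # also flags the legal gap of 5
too_loose  = lambda lay: lay[0] < 2   # misses the real violation at 2

print(score(too_strict, cases))  # → (1, 0): one false positive
print(score(too_loose, cases))   # → (0, 1): one false negative
```

A nonzero first number says the rule is too restrictive; a nonzero second says it has a loophole. Either way, you find out while defining the rule, not years after the deck ships.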

The part that’s not done yet is taking those rules and translating them into specific code that can be run by third-party DRC engines like Calibre, IC Validator, or Assura. Sage calls that DRC Synthesis. That’s an obvious next step in providing a complete process-concept-to-runnable-DRC-deck flow. But the focus right now is on getting engineers in various parts of the silicon ecosystem started using iDRM to create the rules. There’s plenty of work to do at that stage. Which presumably gives Sage more breathing space to get the last bit done.

In case any of you are thinking that this is a foundry-only tool, that’s not necessarily the case. Yes, the initial process design kit (PDK) will be developed at the foundry, and foundries will probably constitute the bulk of the users. But fabless houses often supplement the basic PDK with rules of their own, so they would also be able to leverage iDRM to do that. Failure analysis folks may also want to play with it in order to resolve failures or yield issues that may turn out to be related to layout.

As for the rest of you, well, if it all works as promised, you just might sleep a bit better knowing that the deck you’re working with is clean. Or cleaner than usual, anyway. And if you want to go see how that deck got made, well, it might not seem quite as gross anymore.


More info:

Sage DA
