
A Clean-By-Construction Deck

Sage Introduces a DRC Compiler

So you’ve spent many months on a chip design project that’s winding down. Layout is done, or at least you hope it’s done, and it’s time to make sure that you did it right. (Yes, I know… most designs today will involve many people doing different things, so the “you” here is intended to refer collectively to all of you.)

That means it’s time to run your design rule checks (DRCs). And as the computer hums away on your DRC deck (presumably so-called because at one time it was a deck of Hollerith cards), inspecting every nook and cranny (you hope) for violations, did you ever wonder where that deck came from?

It’s easy to assume that there is some methodical process by which a committee of technology dudes (“dude” in the gender-neutral sense) sat around planning a technology and a set of rules to live by, which they then codified into a DRC deck and pronounced good.

Well, to hear a new startup, Sage, describe it, seeing where your DRC deck comes from is probably more like watching sausage get made. It may not be something you want to witness. And, in fact, with the size of the decks these days, there’s got to be a veritable sausage-fest going on in the technology houses.

It’s not simply a matter of codifying knowledge into specific, executable DRC rules. And here’s where we have to draw some finer distinctions. I’ve always thought of DRC decks as consisting primarily of rules that can be tested as pass or fail. But it’s not quite that simple. Yes, there is a description of the rules, but then there’s a lot of specific code that goes into how violations get identified on-screen.

You might think that a layout program would have a layer for the layout graphics and then a separate “signaling” layer for issues, but apparently not. The DRC rule code literally instantiates polygons that point out to you the location of a violation. And it’s up to the coder to decide how best to represent that violation.
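
To make that concrete, here’s a toy sketch of what one hand-coded check conceptually does. It’s in Python purely for illustration; real decks are written in vendor rule languages, and all the names and the rectangle representation here are mine, not from any actual deck.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    # Axis-aligned rectangle: a toy stand-in for real layout polygons.
    x1: float
    y1: float
    x2: float
    y2: float

def check_min_spacing(shapes, min_space):
    # The pass/fail test and the on-screen signaling in one place: for each
    # pair of shapes closer than min_space, instantiate a marker polygon
    # covering the offending gap (the thing a layout tool highlights).
    markers = []
    for i, a in enumerate(shapes):
        for b in shapes[i + 1:]:
            gap = max(a.x1, b.x1) - min(a.x2, b.x2)    # horizontal edge gap
            lo, hi = max(a.y1, b.y1), min(a.y2, b.y2)  # vertical overlap
            if 0 <= gap < min_space and lo < hi:
                markers.append(Rect(min(a.x2, b.x2), lo,
                                    max(a.x1, b.x1), hi))
    return markers  # an empty list means the rule passes

# Two shapes 0.08 um apart, checked against a 0.10 um spacing rule:
print(check_min_spacing(
    [Rect(0.0, 0.0, 1.0, 0.2), Rect(1.08, 0.0, 2.0, 0.2)], 0.10))
```

Multiply that by thousands of rules, each with its own conditions and its own marker conventions, and the coding burden starts to come into focus.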

Such a rule has to be manually written and coded. Each one. One after another.

And where do the guys writing these rules get their input? From, say, a spreadsheet that carefully describes, in mathematical terms, the rule and any parameters or conditions and specifically how it should be represented on screen? Nope. They get it from a document. A written document called the Design Rule Manual (DRM… no, not that DRM).

The DRM has text descriptions of the rules. The DRC coder reads those descriptions and then decides how to devise a test for them. That means interpreting the intent of the rule correctly (there may be ambiguities). The figuring-out-how-to-test-it part may also not be trivial, and there are cases where the exact rule is too hard to codify and an approximation must do.
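
As a made-up example of that last point: suppose the DRM defines spacing as the true Euclidean distance between shapes. Measuring the x and y separations independently is simpler to code, but at corners the two measures disagree. A quick Python sketch (my illustration, not anything from a real DRM):

```python
import math

def axis_gaps(a, b):
    # a, b are rectangles as (x1, y1, x2, y2) tuples; returns the
    # non-negative edge-to-edge separations along each axis.
    dx = max(a[0] - b[2], b[0] - a[2], 0.0)
    dy = max(a[1] - b[3], b[1] - a[3], 0.0)
    return dx, dy

def euclidean_gap(a, b):
    # The rule as written: shortest 2-D distance between the shapes.
    return math.hypot(*axis_gaps(a, b))

def approx_gap(a, b):
    # The approximation: take the larger single-axis separation.
    return max(axis_gaps(a, b))

# Two shapes offset diagonally by 0.08 um on each axis:
a, b = (0, 0, 1, 1), (1.08, 1.08, 2, 2)
print(euclidean_gap(a, b))  # ~0.113: passes a 0.10 um rule as written
print(approx_gap(a, b))     # 0.08: the approximation flags it anyway
```

Since the larger single-axis gap never exceeds the Euclidean distance, this particular approximation errs on the conservative side: it flags diagonal neighbors that the written rule would actually allow.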

So here we have two levels of interpretation. The first is when the DRM author interprets a rule and casts that interpretation into prose, and the second is when the poor guy writing the code reads that prose and decides how to turn that into actual DRC deck code.

This might not be so bad if there weren’t so many rules. In fact, we’ve lived with it for a long time, and it seems to have been manageable as long as the number of rules remained in the few hundreds or so. But those days are gone. Here are some numbers Sage puts forth: a modern DRM is something like a 600-page document containing about 5,000 rules. Those rules will turn into about 100,000 lines of DRC code, which averages out to around 20 hand-written lines per rule. All manually written. Text and code.

As a result, it can be an incredible chore to validate the DRC code against the DRM to figure out if all of the rules got covered without escape loopholes and without tightening down too far. Which means that, when the decks are first released, they typically have bugs in them, and it can take years before you can really say that they’re clean.

So Sage is proposing a design rule compiler. OK, more than proposing: they’ve built one and recently introduced it. They call it iDRM. The way it works is by allowing graphical entry of a rule, along with conditions and other complicating factors. The tool will then create an executable check.

From that rule, iDRM can then create test layouts that should pass and fail. These can be used to test the rule by running it against the test structures. Users can also take existing layouts and check them against the rules, listing all violations.
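
Sage hasn’t published iDRM’s internals, but the flavor of “test layouts that should pass and fail” is easy to picture for a parametric rule: place shapes at a gap just over the limit and at one just under it. A minimal sketch, with all names invented:

```python
def make_spacing_testcases(min_space, margin=0.01):
    # For a rule "spacing >= min_space", build one layout that should pass
    # (gap just over the limit) and one that should fail (gap just under).
    # Rectangles are (x1, y1, x2, y2) tuples.
    def pair(gap):
        return [(0.0, 0.0, 1.0, 0.2), (1.0 + gap, 0.0, 2.0 + gap, 0.2)]
    return {"expect_pass": pair(min_space + margin),
            "expect_fail": pair(min_space - margin)}

# If a coded check flags the pass case or misses the fail case, the rule
# (or someone's interpretation of it) is wrong.
for label, layout in make_spacing_testcases(0.10).items():
    print(label, "gap =", round(layout[1][0] - layout[0][2], 3))
```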

The idea here is to skip the whole writing-a-War-and-Peace-tome thing and use this tool instead to codify the rules. You then get a one-to-one correspondence between each rule and a correct-by-construction executable version of the rule.

The guy using this can take advantage of the test structures and layout scans to confirm whether he or she defined the rule properly. If there are too many misses (false negatives), then you can see right then and there that you need to tighten up the rule. If you’re getting too many false positives, then you need to find some way to express the intent of the rule that’s not so restrictive.
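
One way to picture that feedback loop (my sketch, not Sage’s actual workflow) is to score the coded check against layouts with known ground truth and tally both failure modes:

```python
def score_rule(check, labeled_layouts):
    # check(layout) -> True if the layout is flagged as violating the rule.
    # labeled_layouts: (layout, truly_violates) pairs, e.g. auto-generated
    # pass/fail structures whose ground truth is known by construction.
    false_pos = false_neg = 0
    for layout, truly_violates in labeled_layouts:
        flagged = check(layout)
        if flagged and not truly_violates:
            false_pos += 1  # too restrictive: flags a legal layout
        if truly_violates and not flagged:
            false_neg += 1  # too loose: misses a real violation
    return false_pos, false_neg

# A check with a transcription bug: coded limit 0.09 um instead of the
# DRM's intended 0.10 um. Layouts are pairs of (x1, y1, x2, y2) rectangles;
# the gap is the second shape's left edge minus the first's right edge.
buggy_check = lambda layout: (layout[1][0] - layout[0][2]) < 0.09
layouts = [([(0, 0, 1, 0.2), (1 + g, 0, 2 + g, 0.2)], g < 0.10)
           for g in (0.085, 0.095, 0.105)]
print(score_rule(buggy_check, layouts))  # (0, 1): one missed violation
```

The 0.095 um gap is a real violation that the buggy check never flags, which is precisely the kind of escape that used to take years of deck releases to shake out.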

The part that’s not done yet is taking those rules and translating them into specific code that can be run by third-party DRC engines like Calibre, IC Validator, or Assura. Sage calls that DRC Synthesis. That’s an obvious next step in providing a complete process-concept-to-runnable-DRC-deck flow. But the focus right now is on getting engineers in various parts of the silicon ecosystem started using iDRM to create the rules. There’s plenty of work to do at that stage. Which presumably gives Sage more breathing space to get the last bit done.

In case any of you are thinking that this is a foundry-only tool, that’s not necessarily the case. Yes, the initial process design kits (PDKs) will be developed there, and foundries will probably constitute the bulk of the users. But fabless houses often supplement the basic PDK with rules of their own, so they would also be able to leverage iDRM to do that. Failure analysis folks may also want to play with it in order to resolve failures or yield issues that may turn out to be related to layout.

As for the rest of you, well, if it all works as promised, you just might sleep a bit better knowing that the deck you’re working with is clean. Or cleaner than usual, anyway. And if you want to go see how that deck got made, well, it might not seem quite as gross anymore.


More info:

Sage DA

