RTL Signoff

A New Step, Championed by Atrenta

The concept of “high risk, high reward” doesn’t hold with semiconductors. Heck, we have an entire industry built on the notion of risk reduction: EDA. EDA tools are enormously expensive to develop and are therefore expensive for users to acquire.

There’s only one reason someone would spend that much money on tools: because it’s going to prevent even more gargantuan losses if something goes wrong with the chip.

OK, granted, much of what EDA is about is productivity – just try cutting a few billion transistors out of rubylith and see what that does for your carpal tunnels. So, really, it’s the 70% of EDA that’s about verification that we’re talking about here.

Silicon used to be the ultimate verification. It still is with MEMS, for the most part. But for chips, we’ve managed to change the silicon step from “verification” to “validation.” All those tools do the verification part with the expectation that a thorough checkout of final silicon will confirm the verification outcomes.

But that means that, before the final commitment to masks, someone has to give approval that everything that can be checked and verified has been checked and verified. And that those items have either all passed or been waived. No stragglers, no stowaways, no surprises.
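
Just to make that gate concrete, here’s a minimal Python sketch of the idea; the check names and the pass/waive record format are invented for illustration and aren’t taken from any real tool:

    # A toy signoff gate: every check must either pass or carry a recorded waiver.
    # Check names and the result format are hypothetical, for illustration only.

    results = {
        "functional_coverage": {"status": "pass"},
        "clock_domain_crossings": {"status": "fail"},
        "power_intent": {"status": "waived", "waiver": "documented justification here"},
    }

    def signoff_blockers(results):
        blockers = []
        for check, result in results.items():
            if result["status"] == "pass":
                continue
            if result["status"] == "waived" and result.get("waiver"):
                continue  # waived is fine, but only with a recorded reason
            blockers.append(check)  # a straggler, stowaway, or surprise
        return blockers

    print(signoff_blockers(results))  # -> ['clock_domain_crossings']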

Hence the concept of “signoff.” The very term conjures up a “stuff just got real” image. You can play around all you want with tools, but if it’s going for signoff, and if your name is going to be on the record, then it’s serious. Early tools may estimate this or that, but at the end, in order to achieve “signoff” quality, estimation time is over: it’s time to be accurate, even though that means more processing time. Measure twice, cut once.

This concept of signoff has been around for a long time – simply because it occurs at the end of the EDA tool flow. You can change the EDA flow all you want – new tools, new algorithms, completely new paradigms – and it doesn’t affect signoff (conceptually) because signoff happens after all of that. Doesn’t matter how you got to the endpoint: once there, things better be right.

Of course, the productivity side of EDA tools comes partly from automation. Take something that a human could do but would take too long to do and would probably make mistakes along the way; have a machine do it instead. It gets done faster and with no (or fewer) errors. That allows abstraction, pushing human involvement further up the chain. For digital chips, the real design work doesn’t happen at the transistor level; it happens at the logic – or even IP – level.

Between logic and transistors is a series of tools that unpack the abstracted design and turn it into a real, physical design. This involves turning behaviors into logic and logic into gates and gates into transistors and wires. In theory, tools can do that perfectly. In practice, part of the verification job is about confirming that the tools worked correctly.

The better those tools work, the more tempting it might be to push “signoff” even further up the chain. Rather than signing off after the tools are all done, if we have enough faith in the final tools, we could sign off at some earlier stage, knowing that the trusty tools will carry forward our more abstracted instructions unerringly.

Of course, although this might get tossed out as a future goal someday, no one is suggesting we’re anywhere near being able to do this today. There’s too much time still being spent tweaking layout for timing, getting power levels down to where we want them, making sure signal integrity is good, confirming that the power grid is robust. You know: all of those real-world physical issues that tend to get lost in the convenience of abstraction.

But, in fact, all of this work that has to be done at the physical level has provided motivation for a slightly different signoff notion: by confirming that the abstracted design is solid, significant amounts of time can be saved from the backend efforts. If all the timing constraints are in place, if all the power intent has been properly described, if all of the requisite assertions have been specified, then the follow-on work can proceed much more quickly.

If not, then you’re going to end up cycling back to your RTL numerous times, filling in the blanks and searching out the missing information.

Hence the current push for “RTL Signoff.” While I’ve seen mentions of it on something of a limited scale by Real Intent, it’s Atrenta that appears to be the primary advocate. And, naturally, it requires things that Atrenta’s tools do well.

Verification steps can, to some extent, be divided into two main categories. The obvious one is, “Is the design doing what it’s supposed to do in the time it’s supposed to do it?” But when we’re still at the abstract RTL level, we can know if the logic works as expected, but not the true performance – we haven’t laid it out yet. We can estimate, but not with signoff accuracy. So this type of verification is hard to do at the RTL stage.
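
To make that estimate-versus-signoff gap concrete, here’s a back-of-the-envelope Python sketch; every number in it is invented and merely stands in for whatever a real RTL-level estimator models:

    # A toy pre-layout timing estimate: before placement and routing, wire delay
    # is unknown, so it gets approximated (here with a crude per-stage guess).
    # All delay numbers are invented for illustration.

    def estimated_path_delay_ns(logic_levels, gate_delay_ns=0.05, wire_guess_ns=0.03):
        return logic_levels * (gate_delay_ns + wire_guess_ns)

    clock_period_ns = 1.0   # a 1 GHz target from the (hypothetical) constraints
    logic_levels = 10       # depth of some path in the RTL

    slack_ns = clock_period_ns - estimated_path_delay_ns(logic_levels)
    print(f"estimated slack: {slack_ns:.2f} ns")
    # Positive here, but only an estimate: after layout, real wire delays replace
    # the guess, and only then can timing actually be signed off.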

The second type of verification is more about making sure all the boxes are checked and that all required information is in place. Not only is this doable at the RTL level; it’s particularly useful there. All of the follow-on processing steps will rely on the instructions provided at the RTL level. Of course, the behavioral description is the obvious one. But more importantly, how fast does it have to be? What are the power requirements? Are there different voltages and sleep requirements for different blocks? Before we generate tests, do we know that the thing is testable?

All of this information informs everything else that follows, so if it’s correct and complete, you save yourself some time. If it’s incorrect or incomplete, you’re likely to come back for another round.

Atrenta’s tools fit this second verification category very well. And so, as they discuss the value of RTL signoff, it’s easy to get the generic signoff requirements confounded with Atrenta’s specific tools. But Atrenta will willingly allow that the concept isn’t tied to their tools: even though they package all of this together as an RTL signoff suite, a user could substitute someone else’s tool for, say, clock domain crossing checks, and it would be fine.

So, from a generic standpoint, what are those functions that should be held to account for RTL signoff? Here is Atrenta’s opinion about the critical items:

  • Functional coverage (including high-quality assertions). Yes, this is somewhat about whether the design works, but it’s more about proving that you’ve checked out the whole design, not just the obvious parts.
  • Routability (congestion; area and timing): this is an estimate, but if these checks indicate that there’s likely to be a routing issue, best solve it now rather than discovering later that the layout tools can’t converge (which, by definition, takes a long time).
  • Timing constraints (including false and multi-cycle paths). Critical for synthesis and layout tools.
  • Clock domain crossings: are there any lurking opportunities for metastability and lost data? (The lost-data part is sketched just after this list.)
  • Power intent (CPF/UPF correctness): this is about making sure that you’ve specified everything that the follow-on tools will need to give you the power behaviors that you expect.
  • Power consumption (and meeting the power budget). This is about actual power performance to the extent that it can be estimated at this point.
  • Testability (stuck-at and at-speed coverage). Some tool is going to have to generate lots of tests; you want to be sure that it can reach all parts of your chip.
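
To see why the clock domain crossing item earns its place (the lost-data half of it, anyway), here’s a toy Python sketch; the clock periods and pulse timing are made-up numbers, and metastability itself isn’t modeled at all:

    # A toy illustration of lost data at an unguarded clock domain crossing:
    # a one-cycle pulse from a fast clock domain is sampled by a slower clock
    # and never seen.

    FAST_PERIOD_NS = 2
    SLOW_PERIOD_NS = 7
    SIM_TIME_NS = 70

    def fast_domain_signal(t_ns):
        # The source domain asserts the signal for exactly one fast-clock cycle.
        return 1 if 10 <= t_ns < 10 + FAST_PERIOD_NS else 0

    # The destination domain only sees the signal on its own (slower) clock edges.
    samples = [fast_domain_signal(t) for t in range(0, SIM_TIME_NS, SLOW_PERIOD_NS)]

    print("slow-domain samples:", samples)
    print("pulse ever observed?", any(samples))
    # -> False: the data was lost, which is why a synchronizer, handshake, or
    # FIFO is needed at a crossing like this.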

In the context of Atrenta specifically, these checks happen through a variety of tests embedded in their SpyGlass, GenSys, and BugScope products.

It is possible that someone else might come up with a slightly longer or shorter list than this. But if the concept of RTL signoff gets watered down to a marketing slogan that varies from company to company, meaning for each company only what’s convenient for them, then it loses value. So Atrenta is engaging in discussions with groups like Accellera and Si2 to see if the industry can come to agreement on a standard set of requirements for RTL signoff.

If that happens, then we’ll add a second critical barrier to our EDA flows. Yes, a barrier is not usually something that sounds like a good idea. But if, by erecting this new barrier, we can get more quickly and reliably to the ultimate barrier before tapeout, it might not be such a bad thing.

If or when this notion is taken up and standardized more formally, we’ll follow up here to see how it all shook out.

Thoughts on “RTL Signoff”

  1. Bryon,
    Thanks for the recognition that Real Intent has been talking about RTL Sign-off. We saw this as an important issue in 2012, and had a half-day tutorial on it at DVCon in San Jose in Feb. 2013. You can see the description here: http://dvcon.org/sites/dvcon.org/files/files/dvcon_2013_final_program_web.pdf#36

    Certainly more can be done to make pre-synthesis verification and sign-off easier for design teams. Toward that goal, Real Intent has presented on the new paradigm of Static Verification, most recently in a keynote by Pranav Ashar, our CTO, at FMCAD in Portland, OR, in Oct. 2013. You can see a summary here: http://www.cs.utexas.edu/users/hunt/FMCAD/FMCAD13/invited-speakers.shtml#invited1
