
Mentor Unifies Verification

Seems like verification unification is in the air. We saw it recently with Synopsys, and now we have a move from Mentor.

While Synopsys’ version looked like an effort to unify acquired technology, Mentor’s effort seems more internal. The big picture involves unifying simulation, formal verification, emulation, and virtual prototyping under one umbrella, with one interface. In that scheme, Mentor presents each of these technologies as an engine serving the higher-level verification goal; none of them is positioned as a separate tool any longer.

But a big part of what’s happening here is about joining emulation and simulation more seamlessly. We actually looked at Cadence’s version of this some time back, when the verification environment was even more fractured and confusing. At the highest level, the goal is to make the boundary between simulation and emulation transparent to the user.

In concept, emulation should simply be a faster simulation engine, and you should be able to move the pieces of your design among the simulator, the emulator, and the virtual prototype based on what needs to be verified in the greatest detail and on how many cycles are required, either to run the tests themselves or to support them (like getting past the boot-up sequence quickly).
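
To make that idea concrete, here’s a toy Python sketch of the kind of engine-assignment decision such a flow implies. The Engine and Block types, the cycle threshold, and the block names are all hypothetical illustrations of the concept, not anything from Mentor’s tools.

```python
from dataclasses import dataclass
from enum import Enum

class Engine(Enum):
    SIMULATOR = "simulator"          # full visibility, slowest
    EMULATOR = "emulator"            # hardware speed, synthesizable content only
    VIRTUAL_PROTOTYPE = "virtual"    # fastest, transaction-level accuracy only

@dataclass
class Block:
    name: str
    needs_signal_detail: bool   # must we see every signal, every cycle?
    est_cycles: int             # roughly how many cycles the tests need
    synthesizable: bool         # can it be mapped into emulator hardware?

def assign_engine(block: Block, cycle_threshold: int = 10_000_000) -> Engine:
    """Toy policy: detail wins; long synthesizable runs go to hardware."""
    if block.needs_signal_detail:
        return Engine.SIMULATOR              # need full visibility -> simulate
    if block.synthesizable and block.est_cycles > cycle_threshold:
        return Engine.EMULATOR               # long runs, mappable -> emulate
    return Engine.VIRTUAL_PROTOTYPE          # otherwise keep it abstract and fast

# Example: a boot ROM needs billions of cycles but no waveform detail,
# so it lands on the emulator; a new DMA block under debug stays in simulation.
for b in [Block("boot_rom", False, 2_000_000_000, True),
          Block("dma_ctrl", True, 50_000, True)]:
    print(b.name, "->", assign_engine(b).value)
```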

In practice, of course, emulators are hardware, and so only so much of the testbench can move from a virtual environment into real hardware – the so-called synthesizable subset. That requires care on the part of verification engineers to support a flexible testbench, and it’s also the point of new verification IP (VIP) that Mentor has included as a part of this announcement – VIP that transitions more easily between simulation and emulation.
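
To give a feel for why only a subset moves into hardware: dual-mode VIP typically splits along a transaction boundary, with an untimed software side generating transactions and a synthesizable bus-functional model (BFM) expanding them into pin-level activity inside the emulator. Below is a rough Python analogy of that split (the real thing would be SystemVerilog transactors; all names here are hypothetical).

```python
import queue
import threading

# Untimed "HVL side": generates transactions; this half stays in software
# whether the DUT runs in simulation or on the emulator.
def stimulus(tx_q: queue.Queue, n: int) -> None:
    for i in range(n):
        tx_q.put({"addr": 0x1000 + 4 * i, "data": i})   # abstract bus write
    tx_q.put(None)                                       # end-of-test marker

# "HDL side" stand-in: in a real flow this is the synthesizable BFM that
# expands each transaction into pin-level cycles inside the emulator.
def bfm(tx_q: queue.Queue) -> None:
    while (tx := tx_q.get()) is not None:
        print(f"drive write addr=0x{tx['addr']:x} data={tx['data']}")

q: queue.Queue = queue.Queue()
t = threading.Thread(target=bfm, args=(q,))
t.start()
stimulus(q, 4)
t.join()
```

The narrow transaction interface is the point: only the BFM half has to be synthesizable, so the rest of the testbench can stay flexible software.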

So a big part of what’s new here is in Mentor’s Veloce emulator: their new OS3. And there are several new Veloce capabilities that are important for supporting this unification:

  • It supports a more simulation-like interaction.
  • Assertions can now be synthesized (see the sketch after this list).
  • It now tracks coverage.
  • It supports Mentor’s push to move emulators out of the lab and into the data center for more effective sharing and better machine utilization, including multi-tenanting on a single machine.
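
On the assertion point above: a concurrent assertion like “every req must see an ack within N cycles” reduces to per-cycle checker logic, which is what makes it mappable into emulator hardware. Here’s a minimal Python sketch of that expansion; the req/ack protocol and all names are hypothetical, purely to show the idea.

```python
def make_req_ack_checker(max_wait: int):
    pending = []                             # cycles-remaining, one per open req

    def on_clock(req: bool, ack: bool) -> bool:
        """Advance one cycle; return False if any countdown has expired."""
        nonlocal pending
        pending = [c - 1 for c in pending]   # age every outstanding request
        ok = all(c >= 0 for c in pending)    # an expired counter = violation
        if ack and pending:
            pending.pop(0)                   # oldest outstanding req satisfied
        if req:
            pending.append(max_wait)         # start a new countdown
        return ok

    return on_clock

check = make_req_ack_checker(max_wait=3)
trace = [(1, 0), (0, 0), (0, 0), (0, 1)]     # req at cycle 0, ack 3 cycles later
assert all(check(req, ack) for req, ack in trace)
```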

Two supporting tools help with this. One is VirtuaLAB, which, somewhat surprisingly, was presented as new, but which we actually saw almost exactly two years ago. This is about eliminating rate matchers when generating “real-world” stimulus for verification. The VirtuaLAB boxes can also go into the data center as general stimulus generators, eliminating the need for someone to be physically present in a lab connecting wires to get data.

The other supporting tool is CodeLink, which we saw quite some time ago. It has supported offline simulation debug all along; it now supports offline emulation debug and review as well.

There’s actually a subtle consideration here for designs underway on existing Veloce machines. If you migrate your emulators into a data center and start sharing them on the new OS version, that change will most likely land in the middle of some design project (it’s impossible to imagine a big company where all projects magically finish at the same time, creating an opportune window for change). And making changes in the middle of a design project is generally not great for schedule confidence. But Mentor assures us of full backwards compatibility, so verification plans being executed under older expectations should work just as if nothing had changed.

Meanwhile, they’ve announced a new unified debugger called Visualizer that supports all of the engines, removing the need to move between debuggers when moving between engines.

And, in another unifying move, verification results are all stored in a single database, regardless of which engine generated them.

This whole unification movement reflects what’s happening on chips themselves: SoCs now integrate pieces that, in earlier times, would have been created, verified, and debugged separately. And with smaller chunks, you could use separate tools for separate parts of the verification plan. But that’s just not feasible now that every aspect of every circuit has to be known to work properly before cutting an outrageously expensive mask set.

You can get more info in their announcement.
