
Mentor Unifies Verification

Seems like verification unification is in the air. We saw it recently with Synopsys, and now we have a move from Mentor.

While Synopsys’ version looked like an effort to unify acquired technology, Mentor’s efforts seem more internal. The big picture involves the unification of simulation, formal, emulation, and virtual prototyping under one umbrella, one interface. In that scheme, Mentor presents each of the technologies as an engine serving the higher-level verification goal; no longer is each one of these things a separate tool.

But a big part of what’s happening here is about conjoining emulation and simulation more seamlessly. We actually looked at Cadence’s version of this some time back, when the verification environment was even more fractured and confusing. The high-level goal is to make the distinction between simulation and emulation transparent to the user.

In concept, emulation should just be a faster simulation engine, and you should be able to push the pieces of your design around between the simulator and the emulator – or the virtual prototype – based on what needs to be verified in the greatest detail and how many cycles are required, either to run the tests themselves or to support the tests (like getting past the boot-up sequence quickly).
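
To make that notion concrete, here’s a minimal sketch – in Python, with entirely hypothetical block names, thresholds, and selection rules – of the kind of policy a verification team might apply when deciding where each piece of a design should run:

```python
# Hypothetical sketch: choosing a verification engine per design block.
# The three engines mirror the article; the blocks, thresholds, and
# selection rules are invented purely for illustration.

from dataclasses import dataclass

@dataclass
class Block:
    name: str
    needs_signal_detail: bool  # e.g., waveform-level debug required
    cycles_required: int       # cycles to run or support the tests

def choose_engine(block: Block) -> str:
    if block.needs_signal_detail and block.cycles_required < 1_000_000:
        return "simulator"          # full visibility, slowest
    if block.cycles_required > 100_000_000:
        return "virtual prototype"  # e.g., racing through OS boot-up
    return "emulator"               # hardware speed, synthesizable content

design = [
    Block("usb_ctrl",      needs_signal_detail=True,  cycles_required=50_000),
    Block("cpu_subsystem", needs_signal_detail=False, cycles_required=500_000_000),
    Block("video_pipe",    needs_signal_detail=False, cycles_required=20_000_000),
]

for b in design:
    print(f"{b.name}: run on the {choose_engine(b)}")
```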

In practice, of course, emulators are hardware, and so only so much of the testbench can move from a virtual environment into real hardware – the so-called synthesizable subset. That requires care on the part of verification engineers to support a flexible testbench, and it’s also the point of new verification IP (VIP) that Mentor has included as a part of this announcement – VIP that transitions more easily between simulation and emulation.
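
That dual-mode idea is easiest to see as a transaction-level interface with interchangeable back ends: the test talks in transactions, and only the layer beneath changes between engines. Here’s a minimal sketch – hypothetical classes and method names, not Mentor’s actual VIP API:

```python
# Hypothetical sketch of dual-mode VIP: the testbench speaks in
# transactions; only the back end changes between simulation and
# emulation. Class and method names are invented for illustration.

from abc import ABC, abstractmethod

class BusDriver(ABC):
    """Transaction-level interface the testbench codes against."""
    @abstractmethod
    def write(self, addr: int, data: int) -> None: ...
    @abstractmethod
    def read(self, addr: int) -> int: ...

class SimBackend(BusDriver):
    """In simulation: drive and sample pins directly."""
    def write(self, addr, data):
        print(f"[sim] drive pins: write 0x{data:x} to 0x{addr:x}")
    def read(self, addr):
        print(f"[sim] sample pins: read from 0x{addr:x}")
        return 0

class EmuBackend(BusDriver):
    """In emulation: forward the transaction to a synthesized
    transactor running inside the emulator (the synthesizable subset)."""
    def write(self, addr, data):
        print(f"[emu] send txn to transactor: write 0x{data:x} @ 0x{addr:x}")
    def read(self, addr):
        print(f"[emu] send txn, await response for 0x{addr:x}")
        return 0

def run_test(driver: BusDriver):
    # The test itself never changes when the engine does.
    driver.write(0x1000, 0xDEAD)
    driver.read(0x1000)

run_test(SimBackend())  # same test, simulation engine
run_test(EmuBackend())  # same test, emulation engine
```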

So a big part of what’s new here is in Mentor’s Veloce emulator: their new OS3 operating system. Several new Veloce capabilities are important for supporting this unification:

  • It supports a more simulation-like interaction.
  • Assertions can now be synthesized.
  • It now tracks coverage.
  • It supports Mentor’s push to move emulators out of the lab and into the data center for more effective sharing and better machine utilization, including multi-tenanting on a single machine.
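
That last point – multi-tenanting – is essentially a packing problem: one machine, a fixed pool of capacity, jobs from several teams. A toy sketch of the allocation idea, with invented board counts and job sizes:

```python
# Toy sketch of multi-tenant emulator sharing: jobs from several
# teams packed onto one machine's boards. The board count and job
# sizes are invented for illustration.

machine_boards = 16  # hypothetical capacity of one emulator

jobs = [             # (team, boards needed)
    ("cpu_team", 8),
    ("gpu_team", 6),
    ("io_team",  4),
]

free, running, queued = machine_boards, [], []
for team, need in jobs:
    if need <= free:
        free -= need
        running.append(team)
    else:
        queued.append(team)

print("running:", running)  # cpu_team and gpu_team share the machine
print("queued: ", queued)   # io_team waits for boards to free up
```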

Two supporting tools help with this. One is VirtuaLAB, which, somewhat surprisingly, was presented as new, but which we actually saw almost exactly two years ago. This is about eliminating rate matchers when generating “real-world” stimulus for verification. The VirtuaLAB boxes can also go into the data center as general stimulus generators, eliminating the need for someone to be physically present in a lab connecting wires to get data.
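
The rate-matcher problem in a nutshell: physical equipment generates traffic at real-world speed, the emulated design consumes it orders of magnitude more slowly, and something must buffer the difference. A virtual generator sidesteps that by producing stimulus only when the emulator is ready for it. A toy illustration, with invented numbers:

```python
# Toy sketch of why virtual stimulus removes the rate matcher.
# A physical generator produces packets at wall-clock rate; the
# emulated DUT consumes them far more slowly, so a buffer (the rate
# matcher) must absorb the difference. A virtual generator instead
# produces a packet only when the emulator asks. Numbers are invented.

real_world_rate = 1_000_000  # packets/sec from physical equipment
emulator_rate   = 1_000      # packets/sec the emulated DUT consumes

# Physical setup: the buffer grows at the rate mismatch.
seconds = 5
backlog = (real_world_rate - emulator_rate) * seconds
print(f"physical rig: rate matcher holds {backlog:,} packets after {seconds}s")

# Virtual setup: stimulus is generated on demand, in emulator time.
def virtual_stimulus():
    seq = 0
    while True:
        seq += 1
        yield f"packet-{seq}"

gen = virtual_stimulus()
for _ in range(3):  # the emulator pulls exactly when it is ready
    print("emulator consumes", next(gen))
```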

The other supporting tool is CodeLink, which we saw quite some time ago. While it has supported offline simulation debug all along, it now supports offline emulation debugging and review as well.

There’s actually a subtle consideration here for designs underway on existing Veloce machines. If you migrate your emulators into a data center and start sharing them under the new OS version, it’s likely that this will happen in the middle of some design project (it’s impossible to imagine a big company where all projects magically finish at the same time, creating an opportune window for change). And making changes in the middle of a design project is generally not great for schedule confidence. But Mentor assures us of full backwards compatibility, so verification plans being executed under older expectations should work just as if nothing had changed.

Meanwhile, they’ve announced a new unified debugger called Visualizer that supports all of the engines, removing the need to move between debuggers when moving between engines.

And, in another trend, verification results are all stored in a single database, regardless of which engine generated them.
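
The payoff is that coverage from every engine can be merged in one place. A minimal sketch of the idea using SQLite – the schema and field names are invented for illustration:

```python
# Minimal sketch: one results store for all engines. The schema is
# invented; the point is that coverage merges regardless of which
# engine produced it.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE results
              (test TEXT, engine TEXT, coverpoint TEXT, hit INTEGER)""")

rows = [
    ("boot_test", "emulator",  "pcie_link_up", 1),
    ("dma_test",  "simulator", "pcie_link_up", 1),
    ("dma_test",  "simulator", "dma_overflow", 0),
    ("soak_test", "emulator",  "dma_overflow", 1),
]
db.executemany("INSERT INTO results VALUES (?,?,?,?)", rows)

# Merged coverage: a coverpoint counts as hit no matter which
# engine hit it.
for cp, hit in db.execute(
        "SELECT coverpoint, MAX(hit) FROM results GROUP BY coverpoint"):
    print(f"{cp}: {'covered' if hit else 'missing'}")
```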

This whole unification movement reflects what’s happening on chips themselves: SoCs now integrate pieces that, in earlier times, would have been created, verified, and debugged separately. And with smaller chunks, you could use separate tools for separate parts of the verification plan. But that’s just not feasible now that every aspect of every circuit has to be known to work properly before cutting an outrageously expensive mask set.

You can get more info in their announcement.
