
Cleaning Up the Verification Shop

It’s one thing if different tools from different divisions of the same company don’t talk seamlessly to each other. That’s generally considered poor form. While it used to be common, EDA folks have cleaned it up a lot over the years.

It’s generally better accepted when tools from one company don’t necessarily integrate well with tools from another company. If there are good strategic reasons, it will happen. If not, then, as a designer or EDA manager, you’re on your own for patching the tools together.

But what about when, as a company, you go on a multi-year shopping spree? Now tools that used to be made by different companies have magically transformed into tools from different – or even combined – divisions within the company. So what might have looked tolerable amongst multiple companies starts to look messy within a single company.

Of course, we know who our intrepid EDA shopper is: They of the Endlessly Open Purse, Synopsys. They recently announced that they are bringing their various verification technologies together under the unified moniker “Verification Compiler.” This unites, to a degree,

  • Static and formal analysis
  • Simulation
  • Coverage management/analysis
  • Verification IP
  • Debug

The way this comes together takes a couple of forms, with more yet to come. To a certain extent, this is a packaging/licensing thing, where what used to be separate products can now be purchased and managed together as a bundle.

From an outside user’s view, however, you will still run the tools as you always did – this isn’t an integration into a seamless, consistent, unified GUI – although that’s the part that’s likely to come in the future. For now, use models will remain similar.

But it’s not only a marketing thing. Underneath, these tools have had their engines upgraded, and, in particular, they have been made to talk to each other much more efficiently using native integration rather than slower (but more portable) approaches like PLI. The entire suite of tools can also be scripted into a unified flow, rather than the current situation where each tool has its own distinct flow.
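A quick aside for anyone who hasn’t written against that older interface: PLI (and its VPI successor) is a standardized C API through which the simulator calls out to foreign code, so every interaction crosses a callback boundary. Below is a minimal sketch of that boundary, assuming any VPI-capable Verilog simulator; the $hello task name is made up purely for illustration.

    #include "vpi_user.h"  /* standard VPI (PLI 2.0) header shipped with simulators */

    /* Hypothetical example: a user-defined $hello system task. Every time
       simulation reaches $hello, the call is marshaled through this C
       callback boundary; a native, in-process integration skips this
       kind of crossing entirely. */
    static PLI_INT32 hello_calltf(PLI_BYTE8 *user_data)
    {
        (void)user_data;                      /* unused here */
        vpi_printf("hello from the PLI/VPI side\n");
        return 0;
    }

    /* Tell the simulator about $hello at library load time. */
    static void register_hello(void)
    {
        s_vpi_systf_data tf;
        tf.type      = vpiSysTask;            /* a task, not a function */
        tf.tfname    = "$hello";              /* made-up name for illustration */
        tf.calltf    = hello_calltf;          /* invoked on each $hello call */
        tf.compiletf = NULL;                  /* no argument checking here */
        tf.sizetf    = NULL;
        tf.user_data = NULL;
        vpi_register_systf(&tf);
    }

    /* Simulators scan this null-terminated table when loading the library. */
    void (*vlog_startup_routines[])(void) = { register_hello, NULL };

Each such crossing is portable between simulators, but it pays call and data-marshaling overhead on every interaction; linking the engines together natively lets them share data directly, which is where the gains below come from.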

The big win from these nuts-and-bolts improvements is performance. They post some pretty impressive gains, summarizing them as 5 times faster (yielding 3 times the productivity). One formal project run by an unnamed customer ran 21 times faster. Capacity has also improved, in some cases by as much as 4 times.

One important message in the face of this inter-tool bonding: Verdi is remaining open. You may recall that one of the items in Synopsys’s shopping cart was SpringSoft, and the Verdi debug tool has a popular open interface and ecosystem. Even though they’re tightening their internal integration with Verdi, they’re not closing off access to outsiders.

In case you’re bringing out your checkbook right now, heads-up: unless you are amongst the anointed, you probably can’t get it yet. This is targeted for end-of-year broad availability; for now, it’s being wrung out by “limited customers.” I’ll leave it to you and Synopsys to decide whether you’re one of them.

And you can find out more about this in their release.
