
Algorithms or Methodologies?

You see it two to four times a year from each EDA player: “x% Productivity Gains with y Tool!” Cadence recently had such an announcement with their Incisive tool; Synopsys has just had a similar story with FineSim.

As I was talking with the Cadence folks about this, I wondered: How much of this productivity gain comes as a result of engine/algorithm improvements, and how much as a result of methodology changes? The answer is, of course, that it comes from both.

But there’s a difference in when the benefits accrue. Engine improvements are immediately visible when you run the tool. Methodology changes: not so much. And there are actually two aspects to methodology.

The first is that, of course, a new methodology requires training and getting used to. So the first project done using a new methodology will take longer; the next one should be better because everyone is used to the new way of doing things. This is a reasonably well-known effect.

But there may be an extra delayed benefit: some methodology changes require new infrastructure or have a conversion cost. If, for example, you replace some aspect of simulation with a new formal tool, you have to modify your testbench and create the new test procedure from scratch. There may be, for instance, numerous pieces of IP that need to be changed to add assertions. These are largely one-time investments; follow-on projects require only incremental work.

In this example, it may be that, even with the conversion work, things go faster on the first project. But productivity will be even better next time, when much of the infrastructure and changes are ready and waiting.
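Here's a minimal sketch of that amortization argument, using entirely made-up effort numbers (none of this comes from Cadence or Synopsys data). It just shows how a one-time conversion cost can still leave the first project ahead of the old flow, with follow-on projects further ahead:

```python
# Illustrative only: hypothetical effort units, not vendor benchmark data.
# Compare per-project effort under the old flow vs. a new methodology that
# carries a one-time conversion cost (testbench rework, assertion retrofits).

BASELINE_EFFORT = 100      # per-project effort with the existing flow
NEW_FLOW_EFFORT = 70       # steady-state effort once the new flow is in place
ONE_TIME_CONVERSION = 20   # infrastructure built once, on the first project
INCREMENTAL_UPKEEP = 5     # small per-project cost on follow-on projects

def project_effort(n: int) -> int:
    """Effort for the n-th project (1-based) under the new methodology."""
    if n == 1:
        return NEW_FLOW_EFFORT + ONE_TIME_CONVERSION
    return NEW_FLOW_EFFORT + INCREMENTAL_UPKEEP

for n in range(1, 4):
    effort = project_effort(n)
    saved = 100 * (BASELINE_EFFORT - effort) / BASELINE_EFFORT
    print(f"Project {n}: {effort} units vs. baseline {BASELINE_EFFORT} ({saved:.0f}% saved)")
```

With these particular numbers, the first project already comes in under the baseline despite the conversion work, and every project after that does better still, which is exactly the pattern described above.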

As to the engines, I was talking to the folks at Mentor yesterday, and wondered whether improvements to the tools themselves become asymptotic: does there come a point when you just can’t go any faster? Their answer was, “No,” since there’s always some bottleneck that wasn’t an issue until the bigger bottlenecks got fixed. The stuff that got ignored keeps bubbling up in priority, the upshot being that there’s always something that can be improved to speed up the tools.
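A toy model of that “next bottleneck bubbles up” point (my own illustration with hypothetical stage times, not Mentor’s analysis): total runtime is a sum of stages, and once the biggest one shrinks, a previously ignored stage becomes the thing worth optimizing next.

```python
# Hypothetical stage times (seconds) for a simulation run -- not measurements
# from any real tool. Shows how fixing the dominant bottleneck promotes the
# next-largest contributor to "new bottleneck."

stages = {
    "elaboration": 10,
    "simulation kernel": 60,
    "waveform dumping": 20,
    "scoreboard checking": 10,
}

def report(label: str) -> None:
    biggest = max(stages, key=stages.get)
    print(f"{label}: total = {sum(stages.values()):.0f}s, "
          f"largest stage = {biggest} ({stages[biggest]:.0f}s)")

report("Before engine improvement")

# Suppose an engine/algorithm improvement speeds the kernel up 4x.
stages["simulation kernel"] /= 4

report("After engine improvement")
# The run is much faster overall, but waveform dumping is now the largest
# stage -- the formerly ignored cost that "bubbles up in priority."
```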

