
Algorithms or Methodologies?

You see it two to four times a year from each EDA player: “x% Productivity Gains with y Tool!” Cadence recently had such an announcement with their Incisive tool; Synopsys has just had a similar story with FineSim.

As I was talking with the Cadence folks about this, I wondered: How much of this productivity gain comes as a result of engine/algorithm improvements, and how much as a result of methodology changes? The answer is, of course, that it comes from both.

But there’s a difference in when the benefits accrue. Engine improvements are immediately visible when you run the tool. Methodology changes: not so much. And there are actually two aspects to methodology.

The first is that, of course, a new methodology requires training and getting used to. So the first project done using a new methodology will take longer; the next one should be better because everyone is used to the new way of doing things. This is a reasonably well-known effect.

But there may be an extra delayed benefit: some methodology changes require new infrastructure or carry a conversion cost. If, for example, you replace some aspect of simulation with a new formal tool, you have to modify your testbench and create the new test procedure from scratch. There may be, for instance, numerous pieces of IP that need to be changed to add assertions. These are largely one-time investments, with only incremental work required on follow-on projects.

In this example, it may be that, even with the conversion work, things go faster on the first project. But productivity will be even better next time, when much of the infrastructure and changes are ready and waiting.
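To make that arithmetic concrete, here's a minimal back-of-the-envelope sketch. The effort figures and the split between a one-time conversion cost and a recurring saving are hypothetical, chosen only to illustrate the pattern, not taken from any vendor's claims:

```python
# Toy model of a methodology change with a one-time conversion cost.
# All numbers are hypothetical and purely illustrative.

baseline_effort = 100       # verification effort per project with the old flow (arbitrary units)
one_time_conversion = 30    # testbench rework, assertions added to IP, new test procedures
per_project_saving = 40     # effort saved per project once the new methodology is in place

def project_effort(n: int) -> int:
    """Effort for the n-th project (1-indexed) under the new methodology."""
    effort = baseline_effort - per_project_saving
    if n == 1:
        effort += one_time_conversion  # infrastructure is built on the first project only
    return effort

for n in (1, 2, 3):
    print(f"Project {n}: {project_effort(n)} units (old flow: {baseline_effort})")
# Project 1: 90  -> already ahead of the old flow, even with the conversion work
# Project 2: 60  -> the delayed benefit shows up once the infrastructure exists
```

With these made-up numbers, the first project already nets out slightly ahead, and the follow-on projects capture the full saving, which is exactly the delayed-benefit pattern described above.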

As to the engines, I was talking to the folks at Mentor yesterday and wondered whether improvements to the tools themselves become asymptotic: does there come a point when you just can't go any faster? Their answer was, "No": there's always some bottleneck that wasn't an issue until the bigger bottlenecks got fixed. The stuff that got ignored keeps bubbling up in priority, the upshot being that there's always something that can be improved to speed up the tools.
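A toy illustration of that bottleneck-shuffling effect, with phase names and runtimes that are entirely hypothetical:

```python
# Illustrative only: why tool speedups never quite run out of road.
# Hypothetical runtime breakdown for a verification run, in minutes.
phases = {"simulation engine": 90.0, "elaboration": 8.0, "waveform dumping": 2.0}

print(f"before: {sum(phases.values()):.0f} min, dominated by the simulation engine")  # 100 min

# Suppose the engine gets 10x faster (a big algorithmic win).
phases["simulation engine"] /= 10
print(f"after: {sum(phases.values()):.0f} min")  # 19 min

# Elaboration, which nobody worried about before, is now ~42% of the run,
# so it becomes the next thing worth optimizing -- and so on.
```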
