
Cadence’s Faster Debug Idea

Cadence is proposing a new way to approach debug. It’s almost an obvious way, except that this isn’t how most debug has traditionally been done. The real reason this hasn’t been done before is simple: data. We’ll come back to that in a sec.

Their point is that, for most debug today, you have to anticipate where problems are likely to crop up and then manually instrument your code with “printf” statements (or the equivalent) so that you get some visibility into what’s going on with your program.
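For concreteness, here's a minimal sketch of that kind of ad-hoc instrumentation (the module and signal names are my own invented example, not anything from Cadence's announcement):

```systemverilog
// Hypothetical monitor: you get visibility only where someone
// thought to add a $display ahead of time.
module fifo_monitor (input logic clk, wr_en, rd_en, full, empty);
  always @(posedge clk) begin
    if (wr_en && full)
      $display("[%0t] WARN: write attempted while FIFO full", $time);
    if (rd_en && empty)
      $display("[%0t] WARN: read attempted while FIFO empty", $time);
    // Anything not printed here stays invisible until you add
    // another $display and resimulate.
  end
endmodule
```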

That works OK for your first simulation run – up to the point when something goes wrong without an accompanying printf to provide clues. So you go back, add more printfs, and – this is the key – you resimulate.

By Cadence’s estimation, 50% of verification effort goes to debugging and another 25% to running tests – together, ¾ of the pie. Each resimulation adds test time, and because debugging proceeds by successive approximation – each run only narrowing in on the cause – the whole loop is inefficient. Their big idea is to make debug more directed and – this is the big part – make it 100% doable after a single verification run.
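To see why that matters, a rough back-of-envelope model (my own illustration, not Cadence’s math): if root-causing a bug takes n add-a-printf-and-rerun passes, the cost looks like n × (T_sim + T_triage). With everything recorded up front, it collapses to roughly T_sim + T_analysis for a single run, plus whatever overhead the recording itself adds.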

The result is Indago (no, it doesn’t sound like “indigo”; it’s “in-DAH-go,” apparently Latin for hunting or tracking). There are a few key pieces to this approach.

The main one is that all artifacts – data, logs, code execution, etc. – are captured. In other words, instead of having to decide ahead of time which data to expose via printf, you simply get everything. Debug then has all the data it could possibly need, and no follow-up runs are required to capture more.
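As a loose analogy using standard Verilog waveform dumping (Indago’s actual recording also covers logs, messages, and code execution – this just gives the flavor of “record everything once”):

```systemverilog
// Rough analogy only: dump the entire design hierarchy once,
// then answer any signal-level question from the file afterwards.
module tb;
  logic clk = 0;
  always #5 clk = ~clk;

  initial begin
    $dumpfile("everything.vcd");
    $dumpvars(0, tb);   // depth 0: record every signal below tb
    #1000 $finish;      // one run; no re-instrumenting, no rerun
  end
endmodule
```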

[Figure: Indago_drawing.png]

From there, they have what they call “root cause analysis,” which helps point you in the direction of a bug. When the testbench flags a signal as incorrect, the tool can generate a short list of possible causes, and you can drill in from there (even crossing into third-party IP, as long as it isn’t encrypted).
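The starting point for that analysis is a testbench-flagged mismatch – something like this hypothetical checker (names are mine; the backward cause-tracing itself is Indago’s part, not shown here):

```systemverilog
// Hypothetical end-of-pipe check: the failing compare below is
// the point from which root-cause analysis would trace backward.
module result_checker (input logic        clk,
                       input logic        valid,
                       input logic [31:0] expected,
                       input logic [31:0] actual);
  always @(posedge clk) begin
    if (valid && (actual !== expected))
      $error("[%0t] Mismatch: expected %0h, got %0h",
             $time, expected, actual);
  end
endmodule
```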

Finally, they have three apps layered above this fundamental technology. One is their Debug Analyzer, which allows multi-language (SystemVerilog, e, and SystemC) code debug. The second is Embedded Software Debug, which helps debug co-verified software and hardware (and is optimized for their Palladium emulator and Incisive simulator). The third, Protocol Debug, provides abstraction when debugging protocols so that you can observe what’s happening at a higher level.

These three apps can be run together at the same time. To some extent, they provide alternative views of the same information, and they stay synchronized: you can move back and forth between them, say, highlighting something in one and then viewing it in another.

Indago isn’t tied to Cadence’s verification tools; it can also be used with verification engines from other EDA providers, mixed and matched.

Finally, a quick word on a buzzphrase that featured prominently in the announcement: Big Data. When you hear that, you might think Hadoop or Lambda Architecture or datamarts or NoSQL searches or any number of mysterious acronyms and algorithms and incantations. Anything up to the point of Deep Learning, which is yet another buzzphrase.

I tried to drill in to see what “Big Data” meant in this context. And, in fact, it’s mostly none of that prior stuff. It’s “big data” in the most general sense, the highest-level big-data concept. And that is, “Grab everything you can, up to and including your mother-in-law, and stash it away cuz you might need it someday.” Indago embraces that aspect – it’s key to eliminating subsequent verification iterations while debugging.

To my earlier point, it’s only in modern times that memory has become cheap and big enough (and that we can dump data to it fast enough) for us to afford to be this “wasteful” – after all, an enormous percentage of that stored data will never, ever be used. Unlike in the past, though, that’s no longer an unacceptable cost: accelerating debug is worth more than the extra storage.
