
AI for EDA: Rohit Sharma’s view from EDPS

 

Rohit Sharma is the founder and CEO of Paripath, an EDA company with a clear mission:

“Enable customers squeeze every pico-second of performance and every milli-watt of power by efficiently providing sign-off accurate models.”

At the recent Electronic Design Process Symposium (EDPS), held at SEMI’s HQ in Milpitas, Sharma discussed the roles that AI might play in EDA. He started by noting that AI/ML research now consumes more than 1% of the world’s R&D budget. (Other EDPS speakers noted that the number of AI research papers has been growing exponentially, suggesting that some AI-research analogue has supplanted Moore’s Law for semiconductors.)

Sharma said that the most likely use for AI in EDA is to add new features. In other words, he expects that adding AI to EDA will not be disruptive, but it definitely has a place. The best fit for AI is in replacing algorithms that have not been successful, or not sufficiently successful.

The example Sharma gave was cell classification—for example, characterizing a certain transistor layout as a full adder. Sharma said this is a common EDA problem and that it is NP-complete. Although “NP” stands for “nondeterministic polynomial” and NP-complete problems are the hardest problems in NP, in my own mind I read “NP-complete” as “not possible to complete.” At least not in any commercially practical amount of time.

It’s sort of like the dilemma that the newly reconstituted Spock faces in “Star Trek IV: The Voyage Home” (aka “Star Trek saves the whales”). Here’s a dialog fragment from the movie to remind you:

 

Kirk: Mr. Spock, have you accounted for the variable mass of whales and water in your time re-entry program?

Spock: Mr. Scott cannot give me exact figures, Admiral, so… I will make a guess.

Kirk: A guess? You, Spock? That’s extraordinary.

 

NP-complete problems are like that. They have “high dimensionality” (Sharma’s words), so they’re hard to encode into a deterministic algorithm. AI inference used for pattern matching has no dilemma here. AI inferencing engines will happily serve up their best “guess.”
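To make “serving up a best guess” concrete, here is a deliberately toy sketch. It is not Sharma’s or Paripath’s method; it is a minimal k-nearest-neighbour classifier over invented cell feature vectors (transistor count, net count, fan-in), and every number and label in it is made up for illustration. The point is only that such a classifier returns a probable label with a confidence score, not a deterministic answer.

```python
from collections import Counter
import math

# Hypothetical training set: each "cell" reduced to a feature vector
# (transistor count, net count, fan-in) with a known label.
# All numbers and labels are invented for illustration only.
TRAINING = [
    ((28, 14, 3), "full_adder"),
    ((30, 15, 3), "full_adder"),
    ((12, 7, 2), "half_adder"),
    ((6, 4, 2), "nand2"),
]

def classify(features, k=3):
    """Return a best-guess label for a cell plus a confidence score,
    using a k-nearest-neighbour vote over the training vectors."""
    # Rank training cells by Euclidean distance to the query vector.
    nearest = sorted(TRAINING, key=lambda t: math.dist(features, t[0]))[:k]
    # Vote among the k closest neighbours; the margin becomes a confidence.
    votes = Counter(label for _, label in nearest)
    label, count = votes.most_common(1)[0]
    return label, count / k

label, confidence = classify((29, 14, 3))
print(label, round(confidence, 2))  # prints: full_adder 0.67
```

The classifier never proves the layout is a full adder; it reports that two of the three nearest known cells are full adders, which is precisely the kind of probabilistic “guess” that challenge 7 below asks users to accept.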

Sharma also listed the challenges associated with adding AI to EDA (generalizable to any AI use):

  1. A clear value proposition. (Just because you can use AI doesn’t mean that doing so is a good idea.)
  2. The AI use model for any specific application.
  3. Data engineering. (Be sure to look at the data set(s) before trying to apply ML.)
  4. High dimensionality. (The Spock dilemma.)
  5. ML technology selection.
  6. Integration of AI into legacy systems.
  7. Acceptance of probabilistic results. (Will AI’s best “guess” suffice?)

In his concluding remarks, Sharma said that despite these challenges, he expects AI/ML will very likely alter the way EDA software is written.

 

One thought on “AI for EDA: Rohit Sharma’s view from EDPS”

  1. Let’s see now: EDA was born when “VERILOG CAN BE SIMULATED!!!!” became the driving force and Verilog should (MUST!) be used for design entry. Designers were reluctant, but as usual hype and buzzwords prevailed.

    No wonder “The example Sharma gave was for cell classification—for example, characterizing a certain transistor layout as a full adder. ” is a problem. They do not yet realize that a transistor layout is a Boolean thing, therefore it is a total mystery.

    There were countless full adders, in technologies ranging from pulse gates to NANDs and NORs, designed, built, and used before Design Automation. In fact, the origin of Design Automation (DA) was to wire PCBs for the IBM System/360 in the early 1960s.

    Automated Logic Diagrams (ALDs) were used to show fan-in and fan-out for each logic gate.
    Starting at any gate, it was possible to find the input logic conditions, the gate’s logic function, and the gates in the network that used its output.

    There were no simulators, so waveforms had to be hand-drawn if they were needed.

    So now EDA still has trouble characterizing an adder? Carry-save and carry-lookahead adders were invented, eyeball-verified, pencil-and-paper simulated, built, and used over 50 years ago.
    The key was Boolean algebra, which EDA has nothing to do with, thank you very much.

    I was one of the first users of ALDs. I designed, debugged, troubleshot, and retrofitted everything from the smallest to the biggest designs. What? Without Verilog, VHDL, simulation, or synthesis?

