
Walking the (Heterogeneous Multiprocessing) Talk

When Will EDA Tools be Accelerated?

For years now, marketing folks at companies that make things like GPUs and FPGAs have been painting a beautiful picture of a gleaming future – a future where dogs and cats get along, unicorns frolic on the lawn, and accelerated computing brings orders-of-magnitude improvements in computational throughput and, particularly, performance-per-watt. It’s a grand story, and the only thing that’s kept it from becoming reality is the minor challenge of finding hardware engineers to rewrite all of the software in the world in RTL (or something like that).

Of course, that vision has not yet become reality, although it is (we are assured) just around the corner, owing to new, improved tools that make programming heterogeneous multiprocessing systems a breeze. Well, maybe not a “breeze” exactly, but pretty darn simple. Here – watch this demo – even our marketing VP can take these fifty lines of C code and whip them into an optimized, accelerated fury in just minutes, with just a couple of mouse clicks.

OK, then, FPGA companies. Let’s see you do this on your own EDA tools. You know – the ones that take hours and often days to complete a single run through simulation, synthesis, place-and-route, and whatever other incantations we need to perform in order to get our “code” to become working hardware. When will that software be included in this grand vision for the future of computing?

Crickets…

Yeah, don’t hold your breath for that to happen anytime soon.

This brings up the perfectly reasonable question: “Why not?” It seems like EDA software should be the poster child for acceleration, right? Critical, complex software applications that are a huge bottleneck for high-value engineering teams. Massive operations on enormous data models. Doesn’t this seem like exactly the kind of thing that heterogeneous computing with FPGAs or GPUs is supposed to solve?

It turns out, however, that the obstacles to accelerating EDA tools are substantial and diverse. First, there are economic challenges. Even though the marketing VP can use an automated tool to convert a few dozen lines of code into optimized hardware/software goo, that process doesn’t scale for the hundreds of millions of lines of code involved in the EDA ecosystem. And a million marketing VPs standing at a million demo stations clicking a million mouse buttons will never produce a hardware implementation of RTL synthesis.

Porting something as large as an EDA tool suite (even just the extremely performance-critical bits) would be a monumental and very expensive effort. Given the comparatively small revenue behind EDA, it would be difficult to make an economic case for the effort. Would tool companies immediately get substantially more revenue? Unlikely. Sure, it would be a boon for users, but it wouldn’t drive big incremental sales and profits to EDA’s bottom line. In fact, it would probably be a giant expense with very little return.

On top of that, we users of EDA software (yep, that’s all of us reading this) don’t really want to replace our current computing infrastructure with new, proprietary boxes jammed full of expensive FPGAs or GPUs alongside the current Xeons. We’d like to be able to run our software on our plain-old generic hardware, thank you very much. Would it make economic sense for a company to buy proprietary hardware to run EDA tools? Maybe, but it would probably not make economic sense for anyone to build that hardware.

Of course, EDA companies (and we are also including FPGA companies in this category, by the way) could theoretically provide accelerated tool solutions in the cloud, running on their own special super-accelerated servers. OK, who wants to do their proprietary engineering work in the cloud? Hello? Anyone? Still with us? Yeah, that’s what we thought.

Beyond these economic and logistical issues, there are serious technical barriers to accelerating much EDA code. Today’s EDA tools typically operate on enormous in-memory data models. Performance is often limited less by how much processing can be parallelized and more by concurrent access to data models. Yes, EDA companies have at least worked in recent years to make it possible for multiple (conventional) servers to work some problems in parallel, but those implementations quickly run into diminishing returns as the number of servers is scaled. Similar obstacles stand in the way of porting code to run on hardware-accelerated architectures. It isn’t just the instruction-crunching speed that’s the challenge.
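The diminishing returns described above are essentially Amdahl’s law at work: if even a modest fraction of a tool’s runtime is serialized – say, by contention for a shared in-memory data model – the achievable speedup flattens quickly no matter how many servers (or accelerators) you throw at it. A minimal sketch, where the 10% serial fraction is an illustrative assumption rather than a measured figure for any real EDA tool:

```python
# Amdahl's law: speedup(n) = 1 / (s + (1 - s) / n),
# where s is the serial (non-parallelizable) fraction of the work.

def speedup(n_servers: int, serial_fraction: float) -> float:
    """Ideal speedup on n_servers when serial_fraction of the work
    is serialized (e.g., by contention on a shared data model)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_servers)

# Illustrative: with just 10% of a run serialized, 64 servers deliver
# under 9x, and the curve can never exceed 1 / 0.10 = 10x.
for n in (1, 4, 16, 64):
    print(f"{n:3d} servers -> {speedup(n, 0.10):.2f}x")
```

Run the loop and the flattening is obvious: quadrupling from 16 to 64 servers buys well under a 1.4x improvement, which is why "just add more boxes" (or more exotic boxes) stops paying off so fast.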

Another deep, dark secret that makes EDA software difficult to accelerate is the unfortunate reality that there is a large amount of code in today’s EDA tools that (sit down before you read this) nobody understands. Yep. A typical EDA tool today is made up in part of software that was written a long time ago in a startup or university far, far away. The engineer or grad student who labored for years to get the innermost loops of those critical routines optimized for speed and functionality has long since retired, gone to work for Facebook or Google, or otherwise moved on to greener pastures. The current engineers at EDA companies treat these areas of code as “black boxes,” generally declining to crack open the hood for fear of disrupting some subtle, incomprehensible bit of black magic that makes the whole thing tick. Often, these would be the very routines that require re-implementation for acceleration. Caveat coder.

EDA isn’t a stranger to acceleration, however. Today, for example, EDA companies sell emulators that accelerate the verification process orders of magnitude beyond what’s possible with RTL simulation. But they accomplish this by basically doing away with the “simulator” entirely and implementing the RTL directly on FPGA-like native hardware. And, in the old days, several EDA companies sold specialized acceleration machines, which were really just glorified, souped-up workstations specifically tuned for accelerating their (primarily simulation) tools. These ultimately failed, however, as it was too much work for the EDA companies to develop custom computers at a pace that kept up with the rate of progress in conventional, general-purpose computing hardware. These accelerators were obsolete almost as soon as they were released.

That all being said, it’s still possible that we will see design tools take advantage of the latest and greatest hardware acceleration technology someday. And, when that happens, it may be the sign that acceleration technology is truly becoming useful to the mainstream, rather than just to a few high-budget, massively-scaled “killer apps” required by the “Super Seven” server companies. It will be interesting to watch.
