Faster Simulation on GPUs

At last week’s SNUG, I had a chat with Uri Tal, CEO of startup Rocketick, about their simulation acceleration technology. What they do bears some resemblance to the semi-automated parallelization done by Vector Fabrics or the code exploration done by CriticalBlue, except that here it works with Verilog instead of C and is fully automated and transparent to the user. He claims they can accelerate simulation by over 10X.

They use a GPU to achieve this kind of parallelization. This has promise both for in-house simulation farms and cloud-based simulation, where GPUs are available (although the cloud hasn’t been their focus).

What they do is create a directed flow graph (DFG) from the Verilog code and then analyze it to determine which parts can be accelerated. Each such part becomes its own thread for the GPU. The acceleratable parts tend to be the synthesizable portions of the code (since hardware logic tends to be highly parallel). They do this on a statement-by-statement basis while tracking dependencies; if too many dependencies cross the boundary, they may change the partition to reduce the size of the dependency cutset. What is left unaccelerated either couldn’t be accelerated or simply didn’t make sense to accelerate.
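Rocketick hasn’t published the details of its partitioning algorithm, so the C sketch below is only a toy illustration of the general idea: the statements, their hand-written dependency lists, and the cutset threshold are all invented. Synthesizable statements start out on the GPU side, and statements get pulled back to the host side while the number of dependencies crossing the partition boundary (the cutset) exceeds the threshold.

/* Toy illustration (not Rocketick's algorithm): partition statements from a
 * dependency graph into GPU-accelerated vs. host-simulated sets, keeping the
 * number of cross-partition dependencies (the "cutset") small.             */
#include <stdio.h>
#include <stdbool.h>

#define NSTMT   6
#define MAX_CUT 2            /* hypothetical threshold on cutset size */

typedef struct {
    const char *text;        /* the Verilog statement (for display only)   */
    bool synthesizable;      /* only synthesizable statements are eligible */
    int  deps[NSTMT];        /* indices of statements this one depends on  */
    int  ndeps;
    bool on_gpu;             /* partition assignment                       */
} Stmt;

/* count dependencies that cross the GPU/host boundary */
static int cutset_size(const Stmt *s, int n) {
    int cut = 0;
    for (int i = 0; i < n; i++)
        for (int d = 0; d < s[i].ndeps; d++)
            if (s[i].on_gpu != s[s[i].deps[d]].on_gpu)
                cut++;
    return cut;
}

int main(void) {
    Stmt s[NSTMT] = {
        { "assign a = b & c;",             true,  {0},    0, false },
        { "assign d = a | e;",             true,  {0},    1, false },
        { "always @(posedge clk) q <= d;", true,  {1},    1, false },
        { "$display(\"q=%b\", q);",        false, {2},    1, false },
        { "assign f = q ^ a;",             true,  {0, 2}, 2, false },
        { "initial $dumpvars;",            false, {0},    0, false },
    };

    /* first pass: everything synthesizable goes to the GPU partition */
    for (int i = 0; i < NSTMT; i++)
        s[i].on_gpu = s[i].synthesizable;

    /* if too many dependencies cross the boundary, pull the statement with
     * the most cross-boundary edges back to the host side until the cut
     * shrinks below the threshold                                          */
    while (cutset_size(s, NSTMT) > MAX_CUT) {
        int worst = -1, worst_cross = 0;
        for (int i = 0; i < NSTMT; i++) {
            if (!s[i].on_gpu) continue;
            int cross = 0;
            for (int d = 0; d < s[i].ndeps; d++)
                if (!s[s[i].deps[d]].on_gpu) cross++;
            for (int j = 0; j < NSTMT; j++)
                for (int d = 0; d < s[j].ndeps; d++)
                    if (s[j].deps[d] == i && !s[j].on_gpu) cross++;
            if (cross > worst_cross) { worst_cross = cross; worst = i; }
        }
        if (worst < 0) break;          /* nothing left to move */
        s[worst].on_gpu = false;
    }

    for (int i = 0; i < NSTMT; i++)
        printf("%-35s -> %s\n", s[i].text, s[i].on_gpu ? "GPU" : "host");
    printf("cutset size: %d\n", cutset_size(s, NSTMT));
    return 0;
}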

Based on this analysis, the tool converts a completely unaccelerated simulation into portions that are offloaded to the GPU and the remaining code, which is regenerated for standard simulation. The accelerated portion is attached to the simulation using the PLI.
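The article doesn’t say which PLI routines Rocketick uses, and the task name and engine call below are invented, but as a rough sketch this is how an external engine commonly gets hooked into a Verilog simulator through the VPI flavor of the PLI:

/* Hypothetical VPI glue: registers a system task ($rocketick_sync is an
 * invented name) that the regenerated Verilog can call to hand control to
 * an external acceleration engine whenever the two sides synchronize.     */
#include "vpi_user.h"

/* stand-in for the call into the GPU runtime engine (pure assumption) */
static void gpu_engine_sync(void)
{
    vpi_printf("handing boundary signals to the acceleration engine\n");
}

static PLI_INT32 sync_calltf(PLI_BYTE8 *user_data)
{
    (void)user_data;
    gpu_engine_sync();   /* exchange boundary values, advance the GPU side */
    return 0;
}

static void register_sync_task(void)
{
    s_vpi_systf_data tf;
    tf.type        = vpiSysTask;
    tf.sysfunctype = 0;
    tf.tfname      = (PLI_BYTE8 *)"$rocketick_sync";   /* invented name */
    tf.calltf      = sync_calltf;
    tf.compiletf   = NULL;
    tf.sizetf      = NULL;
    tf.user_data   = NULL;
    vpi_register_systf(&tf);
}

/* simulators scan this table when the shared library is loaded */
void (*vlog_startup_routines[])(void) = { register_sync_task, 0 };

The regenerated Verilog would then invoke such a task wherever the host-side and GPU-side partitions need to exchange boundary signal values.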

The accelerated threads are turned into a byte code that is executed by a runtime engine. This makes the accelerated “code” portable to any platform; only the runtime engine must be ported. They also manage memory carefully: the GPU uses very wide-word memory, so random byte accesses can be very inefficient, and they manage memory on a per-thread basis to get as much as possible out of each memory read (or write).
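Rocketick’s byte code isn’t documented, so the opcodes below are made up, but a tiny interpreter like this shows why the approach is portable: the compiled instruction stream never changes, and only the small dispatch loop (the runtime engine) has to be rewritten for a new platform.

/* Toy byte-code interpreter in the spirit of a portable runtime engine.
 * The opcodes and encoding are invented for illustration.                 */
#include <stdio.h>
#include <stdint.h>

enum { OP_LOAD, OP_AND, OP_OR, OP_XOR, OP_STORE, OP_HALT };

/* Each instruction: opcode, destination, two source operands. */
typedef struct { uint8_t op, dst, a, b; } Insn;

static void run(const Insn *code, uint32_t *signals)
{
    uint32_t reg[16] = {0};
    for (const Insn *ip = code; ; ip++) {
        switch (ip->op) {
        case OP_LOAD:  reg[ip->dst] = signals[ip->a];           break;
        case OP_AND:   reg[ip->dst] = reg[ip->a] & reg[ip->b];  break;
        case OP_OR:    reg[ip->dst] = reg[ip->a] | reg[ip->b];  break;
        case OP_XOR:   reg[ip->dst] = reg[ip->a] ^ reg[ip->b];  break;
        case OP_STORE: signals[ip->dst] = reg[ip->a];           break;
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    /* signal memory: [0]=b, [1]=c, [2]=e, [3]=a, [4]=d */
    uint32_t signals[5] = { 0xF0F0, 0x0FF0, 0x00FF, 0, 0 };

    /* compiled form of:  assign a = b & c;  assign d = a | e; */
    const Insn code[] = {
        { OP_LOAD,  0, 0, 0 },   /* r0 = b        */
        { OP_LOAD,  1, 1, 0 },   /* r1 = c        */
        { OP_AND,   2, 0, 1 },   /* r2 = r0 & r1  */
        { OP_STORE, 3, 2, 0 },   /* a  = r2       */
        { OP_LOAD,  3, 2, 0 },   /* r3 = e        */
        { OP_OR,    4, 2, 3 },   /* r4 = r2 | r3  */
        { OP_STORE, 4, 4, 0 },   /* d  = r4       */
        { OP_HALT,  0, 0, 0 },
    };

    run(code, signals);
    printf("a = 0x%04X, d = 0x%04X\n", signals[3], signals[4]);
    return 0;
}

The alternative, generating native code for each target, would tie the accelerated design to a specific GPU toolchain; the byte-code approach trades a little interpreter overhead for that portability.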

The accelerated threads dump all the usual files for later analysis by viewers and debuggers. They interface directly with SpringSoft’s Siloti to identify “essential” signals.

You can find more on their website.
