
Faster Simulation on GPUs

At last week’s SNUG (the Synopsys Users Group), I had a chat with Uri Tal, CEO of startup Rocketick, about their simulation acceleration technology. What they do bears some resemblance to the semi-automated parallelization done by Vector Fabrics or the exploration done by CriticalBlue, except that here it works with Verilog instead of C and it’s fully automated and transparent to the user. He claims they can accelerate simulation by over 10X.

They use a GPU to achieve this kind of parallelization. This has promise both for in-house simulation farms and cloud-based simulation, where GPUs are available (although the cloud hasn’t been their focus).

What they do is create a directed flow graph (DFG) from the Verilog code and then go through and figure out which parts they can accelerate. Each such part becomes its own thread for the GPU. The acceleratable parts tend to be the synthesizable portions of the code (as hardware logic tends to be highly parallel). They do this on a statement-by-statement basis while keeping an eye on the dependencies – if there are too many dependencies, they may change the partition to reduce the size of the dependency cutset. What is left unaccelerated either couldn’t be accelerated or simply didn’t make sense to accelerate.
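To make that partitioning step concrete, here is a minimal sketch, in Python rather than anything Rocketick has described, of how synthesizable statements might be greedily grouped while keeping the dependency cutset small. The Stmt class, the partition_statements function, and the max_cut threshold are all hypothetical illustrations, not their algorithm.

```python
# Hypothetical sketch of dependency-aware partitioning; not Rocketick's actual code.
from dataclasses import dataclass

@dataclass
class Stmt:
    """One synthesizable statement: the signals it reads and the signal it drives."""
    name: str
    reads: set
    writes: str

def partition_statements(stmts, max_cut=4):
    """Greedily group statements into GPU-friendly partitions.

    A statement joins the current partition unless it would pull in too many
    signals produced outside that partition (the dependency cutset).
    """
    partitions, current, produced = [], [], set()
    for s in stmts:
        external = {sig for sig in s.reads if sig not in produced}
        if current and len(external) > max_cut:
            partitions.append(current)      # close the current partition...
            current, produced = [], set()   # ...and start a new one
        current.append(s)
        produced.add(s.writes)
    if current:
        partitions.append(current)
    return partitions

# Two statements whose dependency stays internal end up in one partition,
# which would then become a single thread of work for the GPU.
stmts = [Stmt("a1", {"clk", "d"}, "q"), Stmt("a2", {"q", "en"}, "out")]
print(len(partition_statements(stmts)))   # -> 1
```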

Based on this analysis, the tool converts a completely unaccelerated simulation into portions that are offloaded to the GPU and remaining portions that are regenerated for standard simulation. The accelerated portion is attached to the simulator through the PLI (Programming Language Interface).

The accelerated threads are compiled into a byte code that is executed by a run-time engine. This makes the accelerated “code” portable to any platform; only the runtime engine has to be ported. They also manage memory carefully: the GPU uses very wide memory words, so random byte accesses can be very inefficient, and memory is therefore organized on a per-thread basis to get as much as possible out of each read (or write).
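As a rough illustration of the byte-code idea (again a hypothetical sketch, not Rocketick's actual format or runtime), each accelerated thread can be reduced to a small instruction stream over a flat array of signal values that a runtime engine interprets; keeping a thread's operands contiguous in that array is what lets one wide memory access serve many operations.

```python
# Hypothetical three-operand byte code: (opcode, dst, src_a, src_b); not their format.
AND, OR, XOR, NOT = range(4)

def run_thread(program, signals):
    """Interpret one accelerated thread's byte code over a flat signal array.

    `signals` holds the thread's operands contiguously; on a real GPU runtime
    that layout is what lets one wide memory transaction fetch many operands.
    """
    for op, dst, a, b in program:
        if op == AND:
            signals[dst] = signals[a] & signals[b]
        elif op == OR:
            signals[dst] = signals[a] | signals[b]
        elif op == XOR:
            signals[dst] = signals[a] ^ signals[b]
        elif op == NOT:
            signals[dst] = 1 - signals[a]   # src_b ignored for NOT

signals = [1, 1, 0, 0]                               # q = a & b, out = ~q
run_thread([(AND, 2, 0, 1), (NOT, 3, 2, 0)], signals)
print(signals)                                       # -> [1, 1, 1, 0]
```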

The accelerated threads dump all the usual files for later analysis by viewers and debuggers. They interface directly with SpringSoft’s Siloti to identify “essential” signals.

You can find more on their website.

