Accelerating ASIC Verification with FPGA Verification Components

With the ever-increasing size and density of ASICs, conventional simulation-based verification has become a bottleneck in the project development cycle. In conventional verification, simulation time steadily increases as the design matures and the remaining bugs become harder to find.

The verification community has adopted different methodologies to overcome this, trying to reduce development time by introducing verification components and Hardware Verification Languages (HVLs). These help with reusability but do not address simulation time. While an HVL provides better features such as a higher level of abstraction and better randomization, it also increases simulation time significantly: the additional random-generation logic and the extra abstraction, along with PLI calls, reduce simulation speed. As shown in Figure 1, bugs become tougher to find as the design matures, which lengthens simulation runs. Increasing randomization with an HVL to find more of those bugs pushes simulation time, and with it overall development time, up even further.

Hardware-acceleration-based verification methodology can be used to address the verification time issue. With hardware acceleration, the DUT is mapped into an FPGA, which reduces simulation time because the design runs in actual hardware without simulator overhead. But given the size and complexity of today’s chips, hardware acceleration alone is limited in how far it can reduce simulation time.

Figure 1. Conventional Simulation Time

The performance-limiting factor here is the interaction between the testbench and the design on the FPGA: the time spent in the testbench during simulation is the main cause of poor performance. The advantage of putting the design on an FPGA can be extended to the testbench by porting the bus-interface portion of the verification components onto the FPGA as well. This is where the new era of “Synthesizable Verification Components” begins. This approach can easily multiply simulation speed and shorten the design cycle to a great extent.

Consider, as an example, verification of a PCI Express interface design with a standard PIPE interface. The following figure shows a typical verification environment for verifying a PCI Express-based DUT with a PIPE interface.

Figure 2. Conventional Verification Environment

The verification environment here contains verification components and other verification models, checkers, and monitors that can be written in any HVL to speed up development and create plenty of reusable components. But the performance of the whole simulation environment is limited by simulator speed. The data interfaces, shown as red lines, are where most of the activity occurs. With a hardware acceleration method in which the DUT portion is mapped to an FPGA, these red lines become the bottleneck for interaction between the hardware and the simulator. The more logic that can be moved into the hardware box, the lower the simulator overhead and the better the overall simulation performance.

In the proposed environment using “Synthesizable Verification Components,” some of the verification components will be mapped into the hardware as shown in the figure below.

Figure 3. Verification Environment with Synthesizable Verification Component

The verification component, along with the DUT, is mapped inside the FPGA, reducing simulator overhead. In this case, the data transfer between the verification environment and the DUT takes place entirely inside the FPGA.

In the case of PCI Express, the MAC layer performs most of the initialization tasks, which are independent of the layers above it and can easily be mapped into hardware. The data link layer (DLL) also handles part of the initialization, along with the retry mechanism, and can likewise be mapped into hardware, since the retry mechanism can consume a good amount of bandwidth during simulation.
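To illustrate why the retry mechanism is a natural candidate for hardware, the sketch below shows a heavily simplified, replay-buffer-style retry block of the kind that could sit inside the FPGA. It is not a spec-accurate PCI Express data link layer: the module name dll_retry_sketch, the port list, the buffer depth, and the flow-control handling are illustrative assumptions; only the 12-bit sequence numbering and the ACK/NAK semantics follow the standard.

// A heavily simplified sketch of a replay-buffer-style retry block of the
// kind that could be mapped into the FPGA. Not a spec-accurate PCI Express
// data link layer: storage, handshakes, and flow control are illustrative.
module dll_retry_sketch #(
  parameter int unsigned DEPTH = 32            // replay buffer depth (assumed)
) (
  input  logic         clk,
  input  logic         rst_n,
  // new TLP handed down from the transaction layer side
  input  logic         tlp_valid,
  input  logic [127:0] tlp_data,               // placeholder TLP slot
  output logic         tlp_ready,
  // ACK / NAK information extracted from received DLLPs
  input  logic         ack_valid,
  input  logic         nak_valid,
  input  logic [11:0]  acknak_seq,             // AckNak_Seq_Num field
  // TLPs (first transmission or replay) sent toward the link
  output logic         tx_valid,
  output logic [127:0] tx_data,
  output logic [11:0]  tx_seq
);
  logic [127:0] replay_buf [DEPTH];
  logic [11:0]  next_seq;                      // next sequence number to assign
  logic [11:0]  ackd_seq;                      // highest acknowledged sequence
  logic [11:0]  replay_seq;                    // next sequence to retransmit
  logic         replaying;
  logic [11:0]  outstanding;                   // TLPs not yet acknowledged

  // modulo-4096 occupancy; new TLPs are blocked while the buffer is full
  // or while a replay is in progress
  assign outstanding = next_seq - ackd_seq - 12'd1;
  assign tlp_ready   = !replaying && (outstanding < DEPTH);

  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
      next_seq   <= '0;
      ackd_seq   <= '1;                        // "nothing sent yet"
      replay_seq <= '0;
      replaying  <= 1'b0;
      tx_valid   <= 1'b0;
    end else begin
      tx_valid <= 1'b0;

      // ongoing replay: stream stored TLPs back onto the link
      if (replaying) begin
        if (replay_seq == next_seq) begin
          replaying <= 1'b0;                   // nothing left to replay
        end else begin
          tx_data    <= replay_buf[replay_seq % DEPTH];
          tx_seq     <= replay_seq;
          tx_valid   <= 1'b1;
          replay_seq <= replay_seq + 12'd1;
        end
      end

      // first transmission: store the TLP and send it with its sequence number
      if (tlp_valid && tlp_ready) begin
        replay_buf[next_seq % DEPTH] <= tlp_data;
        tx_data  <= tlp_data;
        tx_seq   <= next_seq;
        tx_valid <= 1'b1;
        next_seq <= next_seq + 12'd1;
      end

      // ACK: all TLPs up to and including acknak_seq were received correctly
      if (ack_valid)
        ackd_seq <= acknak_seq;

      // NAK: replay everything after acknak_seq from the buffer
      if (nak_valid) begin
        ackd_seq   <= acknak_seq;
        replay_seq <= acknak_seq + 12'd1;
        replaying  <= 1'b1;
      end
    end
  end
endmodule

With the store-and-replay loop running entirely in hardware, a NAK no longer forces the simulator to regenerate and re-drive the affected TLPs.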

The following diagram shows the basic elements of an HW/SW interface block.

Figure 4. Hardware/Software Interface Block

This block consists of a protocol-dependent interface that talks to the Synthesizable Verification Component on one side and a standard interface on the other. Software interacts with the hardware through DMA to minimize the traffic between the software and the hardware interface: the software dumps instructions for the Synthesizable Verification Component into a command FIFO, and data is read and written through a data FIFO using DMA operations.
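A minimal sketch of the hardware half of such an interface block follows, assuming a simple memory-mapped write port as the “standard interface” and two FIFOs toward the Synthesizable Verification Component. The module name hw_sw_if, the register offsets, and the FIFO depth are illustrative assumptions; full/overflow handling and the read-back path for data returned by the DUT are omitted for brevity.

// Minimal sketch of the hardware half of the HW/SW interface block, assuming
// a simple memory-mapped write port as the "standard interface" and two FIFOs
// toward the Synthesizable Verification Component. Offsets, widths, and depth
// are illustrative.
module hw_sw_if #(
  parameter int unsigned DEPTH = 256
) (
  input  logic        clk,
  input  logic        rst_n,
  // memory-mapped write port driven by software or by the DMA engine
  input  logic        wr_en,
  input  logic [7:0]  wr_addr,
  input  logic [63:0] wr_data,
  // command FIFO read side, toward the synthesizable verification component
  output logic        cmd_empty,
  output logic [63:0] cmd_rdata,
  input  logic        cmd_pop,
  // data FIFO read side, toward the synthesizable verification component
  output logic        dat_empty,
  output logic [63:0] dat_rdata,
  input  logic        dat_pop
);
  localparam logic [7:0] CMD_OFFSET  = 8'h00;  // hypothetical register offsets
  localparam logic [7:0] DATA_OFFSET = 8'h08;
  localparam int unsigned AW = $clog2(DEPTH);

  logic [63:0] cmd_mem [DEPTH];
  logic [63:0] dat_mem [DEPTH];
  logic [AW:0] cmd_wp, cmd_rp, dat_wp, dat_rp; // extra bit distinguishes full/empty

  assign cmd_empty = (cmd_wp == cmd_rp);
  assign dat_empty = (dat_wp == dat_rp);
  assign cmd_rdata = cmd_mem[cmd_rp[AW-1:0]];
  assign dat_rdata = dat_mem[dat_rp[AW-1:0]];

  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
      cmd_wp <= '0; cmd_rp <= '0;
      dat_wp <= '0; dat_rp <= '0;
    end else begin
      // software "dumps" instructions and data by writing the two offsets;
      // in a real system these writes would arrive as DMA bursts
      if (wr_en && wr_addr == CMD_OFFSET) begin
        cmd_mem[cmd_wp[AW-1:0]] <= wr_data;
        cmd_wp <= cmd_wp + 1'b1;
      end
      if (wr_en && wr_addr == DATA_OFFSET) begin
        dat_mem[dat_wp[AW-1:0]] <= wr_data;
        dat_wp <= dat_wp + 1'b1;
      end
      if (cmd_pop && !cmd_empty) cmd_rp <= cmd_rp + 1'b1;
      if (dat_pop && !dat_empty) dat_rp <= dat_rp + 1'b1;
    end
  end
endmodule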

Designing a Synthesizable Verification Component requires verification engineers to look at the problem from a non-conventional point of view. All the luxury of non-synthesizable language constructs disappears, and the verification component has to be built for real hardware. The most crucial part is the architecture of the partition between the synthesizable and non-synthesizable portions of the verification components; this partition is the single most important factor affecting overall performance.
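One possible partition is sketched below, under the same assumptions as the interface block above: a class-based random command generator stays on the simulator side, and only a small command-executing FSM moves into the FPGA. The names mem_cmd_gen and svc_cmd_exec, the 64-bit command format, and the simple bus interface are hypothetical; the point is that the non-synthesizable side is reduced to producing fixed-format command words, while the synthesizable side uses only constructs an FPGA can implement.

// Sketch of one possible partition. The class stays on the simulator/host and
// uses non-synthesizable constructs (randomization, constraints); the module
// below it is the synthesizable half that can live on the FPGA and consume
// commands from the command FIFO of the interface block sketched above.

typedef struct packed {
  logic        write;    // 1 = write, 0 = read
  logic [30:0] addr;     // target address
  logic [31:0] data;     // write data (ignored for reads)
} svc_cmd_t;             // packs into one 64-bit command word

// Non-synthesizable side: randomized command generation.
class mem_cmd_gen;
  rand bit        write;
  rand bit [30:0] addr;
  rand bit [31:0] data;
  constraint aligned_c { addr[1:0] == 2'b00; }   // word-aligned accesses only

  function svc_cmd_t next_cmd();
    void'(this.randomize());
    return '{write: write, addr: addr, data: data};
  endfunction
endclass

// Synthesizable side: a small FSM that pops commands and drives a simple
// bus-style interface toward the DUT. No classes, queues, or delays.
module svc_cmd_exec (
  input  logic        clk,
  input  logic        rst_n,
  // command FIFO read side (see hw_sw_if above)
  input  logic        cmd_empty,
  input  logic [63:0] cmd_rdata,
  output logic        cmd_pop,
  // simple bus interface toward the DUT
  output logic        bus_valid,
  output logic        bus_write,
  output logic [30:0] bus_addr,
  output logic [31:0] bus_wdata,
  input  logic        bus_ready
);
  typedef enum logic {IDLE, ISSUE} state_t;
  state_t state;

  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
      state     <= IDLE;
      bus_valid <= 1'b0;
      cmd_pop   <= 1'b0;
    end else begin
      cmd_pop <= 1'b0;
      case (state)
        IDLE: if (!cmd_empty) begin
          {bus_write, bus_addr, bus_wdata} <= cmd_rdata;  // unpack command word
          bus_valid <= 1'b1;
          cmd_pop   <= 1'b1;
          state     <= ISSUE;
        end
        ISSUE: if (bus_ready) begin
          bus_valid <= 1'b0;
          state     <= IDLE;
        end
      endcase
    end
  end
endmodule

The same svc_cmd_exec module can also be instantiated in a pure simulation environment and fed directly by the class-based generator, which is what keeps the components reusable from day one.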

As verification moves toward system-level simulation, debug time increases steadily. “Synthesizable Verification Components” are a good fit once roughly 90% of the design is mature; the verification time needed to find the remaining 10% of the bugs can then be reduced substantially. In terms of reusability, all synthesizable components can also be used from day one for verification under the conventional simulation method. Only as the design matures do the components migrate to the FPGA system, keeping the verification process transparent across the migration to hardware.
