
Accelerating ASIC Verification with FPGA Verification Components

With the ever-increasing size and density of ASICs, conventional simulation-based verification has become a bottleneck in the project development cycle. In conventional verification, simulation time steadily increases as the design matures and the remaining bugs become harder to find.

The verification community has adopted different methodologies to overcome this. One approach reduces development time by introducing Verification Components and Hardware Verification Languages (HVLs). These help with reusability but do not address simulation time. While an HVL provides better features such as a higher level of abstraction and better randomization, it also increases simulation time significantly: the additional random-generation logic and the higher level of abstraction, along with PLI calls, slow the simulator down. As shown in Figure 1, as the design matures it becomes tougher to find bugs, resulting in longer simulation runs. Randomization is then increased to find more bugs, which increases simulation time and overall development time even further.

A hardware-acceleration-based verification methodology can be used to address the verification time issue. With hardware acceleration, the DUT is mapped into an FPGA, which reduces simulation time because the actual hardware runs without any simulator overhead. But with the size and complexity of today’s chips, hardware acceleration alone is limited in how much simulation time it can recover.

Figure 1. Conventional Simulation Time

The performance-limiting factor here is the interaction between the testbench and the design on the FPGA. The time spent in the testbench during simulation is the main cause of low performance. The advantage of putting the design on an FPGA can be extended to the testbench by porting the bus-interface part of the verification components onto the FPGA. This is where the new era of “Synthesizable Verification Components” begins. This approach can easily multiply the simulation speed and greatly shorten the design cycle.

Consider an example: verification of a PCI Express interface design with a standard PIPE interface. Figure 2 shows a typical verification environment for verifying a PCI Express-based DUT with a PIPE interface.

Figure 2. Conventional Verification Environment

The verification environment here contains verification components and other verification models, checkers, and monitors that can be written in any HVL to speed up development and create reusable components. But the performance of the whole simulation environment is limited by simulator speed. The data interfaces, shown as red lines, are where most of the activity occurs. With a hardware-acceleration approach in which the DUT portion is mapped to an FPGA, these red lines become the bottleneck for interaction between the hardware and the simulator. The more logic you can move into the hardware box, the lower the simulator overhead and the better the overall simulation performance.
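To put rough, illustrative numbers on this (an assumption for the sake of argument, not a measurement from the article): if 80% of wall-clock simulation time is spent in the DUT and the cycle-level bus activity, and only that portion is moved into the FPGA, the remaining simulator-side work caps the overall speedup at about 1 / 0.2 = 5x. Moving the bus-interface part of the verification components into the FPGA as well shrinks the simulator-resident fraction further, which is why the speedup can keep multiplying instead of saturating.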

In the proposed environment using “Synthesizable Verification Components,” some of the verification components will be mapped into the hardware as shown in the figure below.

Figure 3. Verification Environment with Synthesizable Verification Component

The verification component, along with the DUT, is mapped inside the FPGA, reducing simulator overhead. In this case, the data transfer between the verification environment and the DUT takes place inside the FPGA.

In the case of PCI Express, the MAC layer performs most of the initialization tasks, which are independent of the layers above it and can easily be mapped into hardware. The data link layer (DLL) also handles part of the initialization, along with the retry mechanism, and can likewise be mapped into hardware, since the retry mechanism can consume a good amount of bandwidth during simulation.
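As a rough illustration of what this mapping buys (a hypothetical sketch in C; the opcodes, names, and encoding below are invented for this example and are not from the article), the simulator-side environment can be reduced to issuing coarse commands, while link training and the DLL retry/replay logic run autonomously inside the FPGA:

    /* Hypothetical command set for a synthesizable PCIe PIPE verification
     * component; illustrative only. */
    enum svc_cmd {
        SVC_CMD_WAIT_LINK_UP = 0x1,  /* block until the LTSSM reaches L0 (runs in FPGA) */
        SVC_CMD_SEND_TLP     = 0x2,  /* transmit a TLP supplied through the data FIFO   */
        SVC_CMD_INJECT_NAK   = 0x3,  /* force a NAK to exercise the DLL retry path      */
        SVC_CMD_READ_STATUS  = 0x4   /* read back link and retry counters for checking  */
    };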

The following diagram shows the basic elements of an HW/SW interface block.

Figure 4. Hardware/Software Interface Block

This block consists of a protocol-dependent interface that talks to a Synthesizable Verification Component on one side and to a standard interface on the other. Software interacts with the hardware through DMA to minimize the traffic across the software/hardware boundary. The software dumps instructions for the Synthesizable Verification Component into the command FIFO, and data is read or written through the data FIFO using DMA operations.
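A minimal software-side sketch of this flow could look like the following (C, with made-up register offsets, field layouts, and opcode encoding; the article does not specify any of these details). The host stages payload data in a DMA-able buffer, pushes command words into the memory-mapped command FIFO, and lets the DMA engine fill the data FIFO, so the host intervenes once per transaction rather than once per cycle:

    #include <stdint.h>

    /* Hypothetical register map of the HW/SW interface block (illustrative only). */
    #define REG_CMD_FIFO     0x00u   /* write: push one 32-bit command word        */
    #define REG_CMD_STATUS   0x04u   /* read:  bit 0 = command FIFO full           */
    #define REG_DMA_ADDR     0x08u   /* write: physical address of the data buffer */
    #define REG_DMA_LEN      0x0Cu   /* write: transfer length in bytes            */
    #define REG_DMA_KICK     0x10u   /* write: start the DMA into the data FIFO    */
    #define REG_DMA_DONE     0x14u   /* read:  bit 0 = DMA burst complete          */

    #define SVC_CMD_SEND_TLP 0x2u    /* same hypothetical opcode as in the earlier sketch */

    static volatile uint32_t *regs;  /* base of the interface block, set up by the
                                        platform's mmap/driver code (not shown)    */

    static void reg_write(uint32_t off, uint32_t val) { regs[off / 4] = val; }
    static uint32_t reg_read(uint32_t off)            { return regs[off / 4]; }

    /* Push one command word, waiting while the command FIFO is full. */
    static void push_cmd(uint32_t word)
    {
        while (reg_read(REG_CMD_STATUS) & 1u)
            ;                            /* spin until there is room */
        reg_write(REG_CMD_FIFO, word);
    }

    /* Stream 'len' bytes of payload into the data FIFO with one DMA burst,
     * then tell the synthesizable verification component to transmit it. */
    static void send_payload(uint32_t buf_phys, uint32_t len)
    {
        reg_write(REG_DMA_ADDR, buf_phys);   /* 32-bit physical address assumed */
        reg_write(REG_DMA_LEN,  len);
        reg_write(REG_DMA_KICK, 1u);
        while ((reg_read(REG_DMA_DONE) & 1u) == 0)
            ;                            /* wait for the burst to land in the data FIFO */

        push_cmd((SVC_CMD_SEND_TLP << 24) | (len & 0x00FFFFFFu));
    }

The key point is that each call above amounts to a handful of register accesses, instead of the per-cycle exchange a purely simulated bus-functional model would require.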

Verification engineers need to look at the problem from a non-conventional point of view in order to design a Synthesizable Verification Component. All the luxury of non-synthesizable language constructs disappears, and one has to deal with real hardware constraints inside a verification component. The most crucial part is the architecture of the partition between the synthesizable and non-synthesizable portions of the verification components; this partition has the biggest effect on overall performance.
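One way to draw that partition, sketched below in C under assumed names and formats (none of this comes from the article), is to keep everything that relies on non-synthesizable luxury, such as constrained-random generation of transaction fields, on the simulator/host side and flatten it into fixed-width command words, so that the synthesizable component in the FPGA only has to decode those words and drive the bus cycle by cycle:

    #include <stdint.h>
    #include <stdlib.h>

    /* Host-side (non-synthesizable) transaction descriptor: free to use
     * randomization, dynamic memory, and so on. Field names are invented. */
    struct tlp_desc {
        uint8_t  type;     /* e.g. memory read or memory write */
        uint16_t length;   /* payload length in dwords         */
        uint64_t address;  /* target address                   */
    };

    /* Constrained-random generation stays on the host; rand() stands in
     * for an HVL constraint solver in this sketch. */
    static struct tlp_desc random_tlp(void)
    {
        struct tlp_desc t;
        t.type    = (uint8_t)(rand() % 2);        /* read or write */
        t.length  = (uint16_t)(1 + rand() % 32);  /* 1..32 dwords  */
        t.address = ((uint64_t)rand() << 2);      /* dword-aligned */
        return t;
    }

    /* Flatten the descriptor into fixed-width words for the command FIFO.
     * The synthesizable side only needs a simple decoder for this format. */
    static int pack_tlp(const struct tlp_desc *t, uint32_t out[3])
    {
        out[0] = ((uint32_t)t->type << 24) | t->length;
        out[1] = (uint32_t)(t->address & 0xFFFFFFFFu);
        out[2] = (uint32_t)(t->address >> 32);
        return 3;                                 /* words produced */
    }

Everything above the command-word boundary runs on the host and never needs to pass synthesis; everything below it is plain synchronous logic. Drawing that boundary so that chatty, cycle-level behavior never crosses it is what determines how much of the speedup survives.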

As we move toward system-level simulation, debug time increases steadily. “Synthesizable Verification Components” are a good fit once roughly 90% of the design is mature; the verification time needed to find the remaining 10% of the bugs can be easily reduced by using them. In terms of reusability, all synthesizable components can also be used from day one under the conventional simulation method. Only when the design approaches maturity do the components migrate to the FPGA system, keeping the verification process transparent across the migration to hardware.
