
Accelerating ASIC Verification with FPGA Verification Components

With the ever-increasing size and density of ASICs, conventional simulation-based verification has become a bottleneck in the project development cycle. In conventional verification, simulation time steadily increases as the design matures and the remaining bugs become harder to find.

The verification community has adopted different methodologies to overcome this. Development time has been reduced by introducing Verification Components and Hardware Verification Languages (HVLs). These help in terms of reusability but do not address simulation time. While an HVL provides better features such as a higher level of abstraction and better randomization, it significantly increases simulation time: the additional random-generation logic and the higher level of abstraction, along with PLI calls, slow down the simulator. As shown in Figure 1, bugs become harder to find as the design matures, resulting in longer simulation runs. Randomization is then increased to find more bugs, which further increases the simulation time and the overall development time.

Hardware-acceleration-based verification methodology can be used to address the verification time issue. With hardware acceleration, the DUT is mapped into an FPGA, which reduces simulation time since the actual hardware runs without any simulator overhead. But with the size and complexity of today's chips, hardware acceleration alone is limited in how much it can reduce simulation time.

Figure 1. Conventional Simulation Time

The performance-limiting factor here is the interaction between the testbench and the design on the FPGA. The time spent in the testbench during simulation is the main cause of low performance. The advantage of putting the design on an FPGA can be extended to the testbench by porting the bus-interface part of the verification components onto the FPGA. This is where the new era of “Synthesizable Verification Components” begins. This approach can easily multiply the simulation speed and greatly shorten the design cycle.

Consider the example of verifying a PCI Express interface design with a standard PIPE interface. The following figure shows a typical verification environment for a PCI Express-based DUT with a PIPE interface.

Figure 2. Conventional Verification Environment

The verification environment here contains verification components and other verification models, checkers, and monitors that can be written in any HVL to speed up development and create reusable components. But the performance of the whole simulation environment is limited by simulator speed. The data interfaces, shown as red lines, are where most of the activity occurs. With a hardware-acceleration method where the DUT portion is mapped to an FPGA, these red lines become the bottleneck in the interaction between the hardware and the simulator. The more logic you can move into the hardware box, the less simulator overhead remains, improving overall simulation performance.
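The cost of leaving the testbench on the simulator can be estimated with an Amdahl's-law style calculation. The sketch below is a back-of-the-envelope model; the fractions and acceleration factor are illustrative assumptions, not measurements from the article.

```python
def accelerated_speedup(testbench_fraction: float, dut_accel: float) -> float:
    """Estimate overall simulation speedup when only the DUT is accelerated.

    testbench_fraction: fraction of simulation time spent in the testbench
        (stays on the simulator, unaccelerated).
    dut_accel: acceleration factor for the DUT portion mapped to the FPGA.
    """
    dut_fraction = 1.0 - testbench_fraction
    return 1.0 / (testbench_fraction + dut_fraction / dut_accel)

# If 60% of the time is spent in the testbench, even a 1000x faster DUT
# yields less than a 2x overall gain -- the testbench dominates.
print(round(accelerated_speedup(0.60, 1000.0), 2))   # 1.67

# Moving bus-interface logic into the FPGA so only 10% of the time stays
# on the simulator raises the ceiling to roughly 10x.
print(round(accelerated_speedup(0.10, 1000.0), 2))   # 9.91
```

This is why accelerating the DUT alone plateaus quickly, and why moving verification-component logic into the FPGA pays off disproportionately.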

In the proposed environment using “Synthesizable Verification Components,” some of the verification components will be mapped into the hardware as shown in the figure below.

Figure 3. Verification Environment with Synthesizable Verification Component

The verification component, along with the DUT, is mapped inside the FPGA, reducing simulator overhead. In this case, the data transfer between the verification environment and the DUT takes place inside the FPGA.

In the case of PCI Express, the MAC layer performs most of the initialization tasks independently of the layers above it, so it can easily be mapped into hardware. The Data Link Layer also handles part of the initialization, along with the retry mechanism, and can likewise be mapped into hardware, since the retry mechanism can consume a good amount of bandwidth during simulation.
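The retry mechanism is a natural candidate for hardware because it is simple but bandwidth-hungry bookkeeping. The sketch below models the Data Link Layer replay behavior at a purely behavioral level; the class and method names are illustrative, and it ignores sequence-number wraparound, timers, and CRC details from the actual PCI Express specification.

```python
from collections import OrderedDict

class ReplayBuffer:
    """Behavioral model of a Data Link Layer retry (replay) buffer.

    Transmitted packets are held, keyed by sequence number, until the
    receiver acknowledges them; a NAK replays everything still in flight.
    """

    def __init__(self):
        self._in_flight = OrderedDict()  # seq -> packet, in send order
        self._next_seq = 0

    def send(self, packet):
        seq = self._next_seq
        self._in_flight[seq] = packet
        self._next_seq += 1
        return seq

    def ack(self, seq):
        # ACK is cumulative: release this packet and all earlier ones.
        for s in list(self._in_flight):
            if s <= seq:
                del self._in_flight[s]

    def nak(self):
        # Replay every unacknowledged packet, oldest first.
        return list(self._in_flight.values())

buf = ReplayBuffer()
for pkt in ("TLP0", "TLP1", "TLP2"):
    buf.send(pkt)
buf.ack(0)            # TLP0 delivered
print(buf.nak())      # ['TLP1', 'TLP2'] must be replayed
```

Every NAK triggers a burst of replayed traffic, which is exactly the kind of repetitive activity that is cheap in an FPGA but expensive for a simulator to model cycle by cycle.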

The following diagram shows the basic elements of an HW/SW interface block.

Figure 4. Hardware/Software Interface Block

This block consists of a protocol-dependent interface that talks to a Synthesizable Verification Component on one side and a standard interface on the other. Software interacts with the hardware through DMA to minimize the interaction between the software and the hardware interface. The software dumps instructions for the Synthesizable Verification Component into the command FIFO, and data reads and writes via the data FIFO take place using DMA operations.
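At a behavioral level, the flow through this block can be sketched as follows. This is a software-only toy model: the FIFO structure, command tuples, and method names are assumptions for illustration, not the article's actual interface.

```python
from collections import deque

CMD_WRITE, CMD_READ = 0, 1

class HwSwInterface:
    """Toy model of the HW/SW interface block: a command FIFO plus a
    data FIFO moved in DMA-style bursts, in front of a memory model."""

    def __init__(self):
        self.cmd_fifo = deque()
        self.data_fifo = deque()
        self.memory = {}          # stands in for the DUT-facing side

    # -- software side ------------------------------------------------
    def post_write(self, addr, burst):
        self.cmd_fifo.append((CMD_WRITE, addr, len(burst)))
        self.data_fifo.extend(burst)      # DMA pushes the payload

    def post_read(self, addr, length):
        self.cmd_fifo.append((CMD_READ, addr, length))

    # -- hardware side ------------------------------------------------
    def service(self):
        """Drain the command FIFO, as the synthesizable side would."""
        while self.cmd_fifo:
            op, addr, length = self.cmd_fifo.popleft()
            if op == CMD_WRITE:
                for i in range(length):
                    self.memory[addr + i] = self.data_fifo.popleft()
            else:  # CMD_READ: DMA returns data through the data FIFO
                self.data_fifo.extend(
                    self.memory.get(addr + i, 0) for i in range(length))

iface = HwSwInterface()
iface.post_write(0x100, [0xAA, 0xBB])
iface.post_read(0x100, 2)
iface.service()
print(list(iface.data_fifo))   # [170, 187], i.e. 0xAA, 0xBB read back
```

The point of the FIFOs is batching: software queues many commands and lets DMA move the payloads, so the hardware runs long stretches without per-transaction software intervention.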

Verification engineers need to look at the problem from a non-conventional point of view in order to design a Synthesizable Verification Component. All the luxury of non-synthesizable language constructs disappears, and one has to deal with real hardware scenarios within a verification component. The most crucial part is the architecture for partitioning the verification component into synthesizable and non-synthesizable portions; this partition has the biggest effect on overall performance.
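One way to draw that partition is to keep constrained-random transaction generation on the non-synthesizable side and let only a fixed-width command encoding cross the boundary, so the hardware side needs nothing more than a simple decoder and driver FSM. A minimal sketch of such an encoding follows; the field widths and layout are assumptions for illustration, not a defined format.

```python
import random

def pack_command(is_write: int, length: int, addr: int) -> int:
    """Pack a transaction into one 32-bit command word:
    [31]    read/write flag
    [30:24] burst length (up to 127)
    [23:0]  address
    """
    assert 0 <= length < 128 and 0 <= addr < (1 << 24)
    return (is_write << 31) | (length << 24) | addr

def unpack_command(word: int):
    """What the synthesizable decoder does: pure bit slicing."""
    return (word >> 31) & 0x1, (word >> 24) & 0x7F, word & 0xFFFFFF

# Randomization -- the non-synthesizable luxury -- stays in software...
rng = random.Random(2024)
txn = (rng.randint(0, 1), rng.randint(1, 64), rng.randrange(0, 1 << 24, 4))

# ...and only the packed word crosses into the FPGA, where it is
# recovered by combinational logic.
assert unpack_command(pack_command(*txn)) == txn
print(f"command word: 0x{pack_command(*txn):08X}")
```

Keeping the boundary this narrow is what keeps the verification process transparent: the software side is unchanged whether the decoder runs in simulation or on the FPGA.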

As we move toward system-level simulation, debug time increases steadily. “Synthesizable Verification Components” are a good fit once roughly 90% of the design is mature; the verification time needed to find the remaining 10% of the bugs can then be greatly reduced. In terms of reusability, all synthesizable components can also be used from day one under the conventional simulation method. Only when the design approaches maturity do the components migrate to the FPGA system, keeping the verification process transparent across the migration to hardware.
