
Co-Verification Methodology for Platform FPGAs

The emergence of affordable high-end FPGAs is making them the technology of choice for an increasing number of electronics products that were previously the exclusive domain of ASICs. Offering unprecedented levels of integration on a single chip, today's programmable devices have greatly expanded the size, scope, and range of applications that can be deployed on them.

To ensure a fast and efficient implementation of these advanced, feature-rich FPGAs, designers need access to the latest productivity-enhancing electronic design automation (EDA) tools and methodologies. For years, hardware/software (HW/SW) co-verification has been commonly used to debug ASIC SoC designs. Now, with embedded processors such as the IBM PowerPC 405 combined with the multi-million-gate capacities commonplace in Virtex series FPGAs, ASIC-strength methodologies such as co-verification have increasing relevance and value in the FPGA design space.

The Debug Challenge

By various accounts, design verification is the most serious bottleneck engineers face in delivering multi-million-gate SoCs. In the case of ASICs, it is not uncommon for teams to spend 50 to 70 percent of a project's schedule on verification and debug. In FPGAs, where the penalty of a design error is less severe and a respin is a matter of hours rather than months, there is nonetheless a clear need for efficient debug methodologies that let design teams identify and fix errors early in the process.

In the particular instance where a processor is part of a design, the interface between hardware and software becomes an area of increased focus and attention. Validating that the hardware and software will function correctly together can become an important aspect in the overall verification process. It is therefore essential that specialized methodologies such as HW/SW co-verification be available to FPGA designers, enabling them to achieve not only a higher debug efficiency but also a more streamlined approach to verification of their processor-based designs.

Co-Verification Simplifies the Debug Equation

The basic concept behind co-verification is to merge the respective debug environments used by hardware and software teams into a single framework. This provides designers with concurrent and early access to both the hardware and software components of the designs, thereby contributing to reducing the overall project cycle time.

From a performance perspective, processor models known as Instruction Set Simulators (ISS) can significantly speed up processor simulation execution when compared to using a register transfer level (RTL) model of the CPU. Moving up a level of abstraction enables engineers to verify large embedded processor-based FPGA systems – systems that could not otherwise be verified within a practical timeframe using conventional HDL simulation.
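To make that abstraction jump concrete, here is a minimal, purely illustrative sketch of what an instruction set simulator does: it interprets instructions one at a time at the architectural level, with no notion of gates or clock edges. The three-instruction ISA (`LOADI`, `ADD`, `HALT`) is invented for the example and does not correspond to the PowerPC 405 or any real processor.

```python
# Minimal sketch of an instruction set simulator (ISS) for a hypothetical
# three-instruction ISA -- illustrative only, not a real processor model.
class TinyISS:
    def __init__(self, program):
        self.regs = [0] * 4       # four general-purpose registers
        self.pc = 0               # program counter
        self.program = program    # list of (opcode, operands...) tuples

    def step(self):
        """Execute one instruction at the current PC; return False on HALT."""
        op, *args = self.program[self.pc]
        if op == "LOADI":         # LOADI rd, imm  ->  rd = imm
            rd, imm = args
            self.regs[rd] = imm
        elif op == "ADD":         # ADD rd, ra, rb ->  rd = ra + rb
            rd, ra, rb = args
            self.regs[rd] = self.regs[ra] + self.regs[rb]
        elif op == "HALT":
            return False
        self.pc += 1
        return True

    def run(self):
        while self.step():
            pass

prog = [("LOADI", 0, 2), ("LOADI", 1, 3), ("ADD", 2, 0, 1), ("HALT",)]
iss = TinyISS(prog)
iss.run()
print(iss.regs[2])   # 5
```

Because each instruction is one function call rather than thousands of simulated gate evaluations per clock cycle, an ISS executes embedded code orders of magnitude faster than an RTL processor model.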

In addition, an efficient co-verification tool can help uncover a range of HW/SW interface problems, which include:

— Initial startup and boot sequence errors (including RTOS boot)
— Processor and peripheral initialization and configuration problems
— Memory accessing and initialization problems
— Memory map and register map discrepancies
— Interrupt service routine errors
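As a concrete illustration of one of these problem classes, a memory-map or register-map discrepancy is simply a disagreement between the software team's address definitions and the hardware team's address decoder. The sketch below, with entirely hypothetical peripheral names and addresses, shows the kind of cross-check that co-verification performs implicitly when real driver code hits real decode logic:

```python
# Sketch of a register-map cross-check between the software view (e.g. from a
# C header) and the hardware view (e.g. from the RTL address decoder).
# All peripheral names and addresses here are hypothetical.
sw_register_map = {
    "UART_TX":    0x8000_0000,
    "UART_RX":    0x8000_0004,
    "TIMER_CTRL": 0x8000_0010,
}
hw_register_map = {
    "UART_TX":    0x8000_0000,
    "UART_RX":    0x8000_0004,
    "TIMER_CTRL": 0x8000_0014,   # hardware decoder placed it 4 bytes later
}

def find_discrepancies(sw, hw):
    """Return registers whose software and hardware addresses disagree."""
    return {name: (sw[name], hw[name])
            for name in sw.keys() & hw.keys()
            if sw[name] != hw[name]}

print(find_discrepancies(sw_register_map, hw_register_map))
```

In a pure-simulation flow, a mismatch like this typically surfaces only as a mysteriously unresponsive peripheral; in co-verification, the offending access is visible the first time the driver touches the wrong address.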

The Advantages of Co-Verification

By uniting the hardware and software simulation environments in a processor-based system, a co-verification tool can be conceptually viewed as an extension of traditional “functional simulation” in logic-only designs. The co-verification concept establishes value for multiple design teams including hardware engineers (peripheral logic debug), embedded software engineers (SW application and firmware debug), and system designers (performance analysis and tuning).

To fully realize the advantages of co-verification, there are three prerequisites the design under test must meet:

– The system includes a processor executing software code as part of the design.

– There is extensive interaction between software and hardware parts of the design during execution.

– Both hardware and software engineering teams agree on using co-simulation early in the design stage.

These prerequisites increase the likelihood of achieving a smooth methodology flow and a common communication medium between the two teams. Once the above requirements are met, co-verification offers several key benefits when compared to using simulation alone.

1. Faster Performance

Pure logic simulation can be used to simulate a design containing a processor by including an RTL model of the processor and executing the software code on it. This approach, however, is painfully slow and insufficient for all but the most basic debug requirements: overall simulation speed is generally in the sub-100 Hz range. The bottleneck is the accurate but slow logic simulator, because every time the software needs to communicate with hardware, the transaction must pass through it.

In comparison, co-verification is able to run simulation orders of magnitude faster. This speed-up is achieved through several methods, including the use of faster processor models known as instruction set simulators. The ISS significantly increases the simulation speed of the processor, but this alone is not enough. Bottlenecks still remain because the software running on the ISS is much faster than the hardware running on the slow logic simulator; consequently, the software and ISS are always waiting for the hardware and logic simulator to catch up. Advanced co-verification suites, such as Mentor Graphics® Seamless® FPGA, bypass this fundamental limitation by introducing the concept of a coherent memory server (CMS). Using the CMS, the ISS is able to read and write memory about 10,000 times faster than if it had to go through a logic simulator. Given that processor-to-logic interaction consists mostly of read-write cycles to memory – fetching instructions, accessing peripheral registers, and the like – overall simulation speed is dramatically increased by diverting most routine CPU-to-memory transactions through the faster CMS instead of through the logic simulator.

Only the transactions from processor to memory that are under active debug run through the precise but slow logic simulator. Typically, this means the simulator bottleneck is only a factor in less than one percent of the software-hardware transactions, thus providing a significant overall throughput advantage versus pure RTL simulation.
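The routing idea behind a coherent memory server can be sketched as a simple dispatcher: only accesses to addresses under active debug take the slow, cycle-accurate simulator path, while everything else is served directly from a fast memory image. This is a conceptual sketch, not Seamless FPGA's actual implementation; the addresses and access counts are illustrative.

```python
# Conceptual sketch of coherent-memory-server style routing: only addresses
# under active debug take the slow logic-simulator path; everything else is
# served by a fast behavioral memory image. Addresses are hypothetical.
class CoSimMemory:
    def __init__(self, watched_addresses):
        self.mem = {}                        # fast memory image
        self.watched = set(watched_addresses)
        self.fast_accesses = 0
        self.slow_accesses = 0

    def read(self, addr):
        if addr in self.watched:
            self.slow_accesses += 1          # full bus cycle in the HDL simulator
        else:
            self.fast_accesses += 1          # direct lookup, no simulator involved
        return self.mem.get(addr, 0)

    def write(self, addr, value):
        if addr in self.watched:
            self.slow_accesses += 1
        else:
            self.fast_accesses += 1
        self.mem[addr] = value

mem = CoSimMemory(watched_addresses={0x4000_0000})   # one peripheral under debug
for pc in range(0, 4000, 4):                         # 1000 instruction fetches
    mem.read(pc)
mem.write(0x4000_0000, 0x1)                          # the one watched access
print(mem.slow_accesses, mem.fast_accesses)          # 1 1000
```

Even in this toy run, 1,000 of 1,001 transactions bypass the simulator path entirely, which is the mechanism behind the "less than one percent" figure above.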

2. Increased Comprehension

To efficiently address debugging problems that span multiple teams, designers need tools and methodologies that fit the specific needs of each team. For example, the SW team would find debugging processor code on a logic simulator to be inherently inefficient and impractical.

With advanced co-verification tools such as Seamless FPGA, where a cycle-accurate ISS model replaces the RTL processor model, a symbolic source-level debugger can be attached to the ISS, making possible an interactive and intuitive software debug environment. Standard features of a software debugger include stepping through source code (C and assembly), setting breakpoints, and observing register and memory contents. The symbolic debugger thus gives the designer far greater control and comprehension than attempting to debug processor code with an HDL processor model running on a logic simulator.

3. Support for Abstract Models

Oftentimes, when very high data throughput is required to validate certain design functions, RTL models must be replaced with faster, more abstract behavioral models. These high-speed models, usually written in C or C++, interact with the ISS at very high speeds, allowing complex protocols to be tested rapidly and comprehensively.

Seamless FPGA allows users to plug in these behavioral models through a "C-Bridge" interface technology. By working less in the logic simulator and more with higher-level models, verification runs achieve significant performance gains. With increased simulation throughput, the virtual platform can offer visibility into system performance and architectural trade-off issues at a very early stage in the design process. Designers can quickly validate functionality while also analyzing and tuning important system attributes, such as bus bandwidth, latency, and contention – all leading to increased system performance.
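The following sketch shows what such a behavioral model looks like at the transaction level: a FIFO peripheral modeled as whole read and write transactions rather than clocked RTL. It is written in Python for brevity (such models would typically be C or C++ in practice), and the class name and register offsets are hypothetical, not taken from any C-Bridge example.

```python
# Sketch of a transaction-level behavioral model of the kind that might be
# plugged in through a C-bridge style interface: a FIFO peripheral modeled
# as whole bus transactions rather than clocked RTL. Offsets are hypothetical.
from collections import deque

class BehavioralFifo:
    DATA, STATUS = 0x0, 0x4          # hypothetical register offsets

    def __init__(self, depth=16):
        self.depth = depth
        self.q = deque()

    def write(self, offset, value):
        if offset == self.DATA and len(self.q) < self.depth:
            self.q.append(value)     # one call models an entire bus write

    def read(self, offset):
        if offset == self.DATA:
            return self.q.popleft() if self.q else 0
        if offset == self.STATUS:    # bit 1 = full, bit 0 = empty
            return ((len(self.q) == self.depth) << 1) | (len(self.q) == 0)
        return 0

fifo = BehavioralFifo()
fifo.write(BehavioralFifo.DATA, 0xAB)
print(hex(fifo.read(BehavioralFifo.DATA)))   # 0xab
print(fifo.read(BehavioralFifo.STATUS))      # 1  (empty again)
```

Because each transaction is a single function call, the ISS can push thousands of times more protocol traffic through such a model than through its RTL equivalent, which is what makes early throughput and contention analysis practical.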

Additional FPGA Co-Verification Benefits

With access to co-verification technology, processor-based designs are not only easier to debug, but the process starts much earlier in the design cycle, which makes it more likely the design project will be completed earlier – with minimized risk of surprises later on.

Finding Problems Earlier

Design teams are highly motivated to identify and fix problems at an early stage in the design cycle. A well-known axiom states: "The earlier a problem can be identified, the easier and cheaper it is to fix." Typically, designers cannot begin software verification until a hardware prototype is available. When software verification occurs serially in this way, HW/SW interaction problems may not be detected until much later in the design cycle.

A virtual prototyping and debug environment removes this restriction by enabling product integration ahead of board and device availability, or even before the final design is committed. With co-verification, software teams do not have to wait for silicon before they can start developing and testing their portions of the design. As a result, problems can be found earlier and the time to working silicon is dramatically reduced.

Simplified Testbenches

To verify design functions, hardware engineers often write elaborate HDL testbench routines. These testbenches can become very complex, and it is not uncommon for the testbench code size to approach that of the design itself. With co-verification, the ISS processor model allows testbenches to be greatly simplified.

For hardware verification engineers testing protocols and device drivers, testbenches are simplified because actual embedded software code – and not contrived testbench code – is driving the hardware circuits.

Similarly, software engineers do not have to resort to writing stub code; the actual hardware design provides real-life responses to calls made to it. Overall, this leads to fuller, more comprehensive test coverage and, in turn, to increased confidence that the design will work in silicon the first time.

Ability to “Freeze” and Control Runtime

An important attribute of debugging in the virtual domain is the ability to “stop time.” As a result, it is possible to simultaneously observe and modify the internal values of the CPU registers, as well as those of the hardware device registers with which the processor is communicating. Freezing and synchronizing the hardware and software domains offers the ultimate in control and observability – and it is invaluable in efficiently helping debug complex and intricate transactions.
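The idea can be sketched as a small state holder in which inspection and modification are permitted only while both domains are frozen. Everything here, the class name, the registers, and the values, is illustrative rather than any tool's actual interface:

```python
# Sketch of the "stop time" idea: when a breakpoint hits, both the software
# side (CPU registers) and the hardware side (device registers) stop
# advancing and can be inspected or modified together. All state is illustrative.
class FrozenCoSim:
    def __init__(self):
        self.cpu_regs = {"pc": 0x100, "r0": 0}   # software-domain state
        self.dev_regs = {"STATUS": 0x0}          # hardware-domain state
        self.frozen = False

    def hit_breakpoint(self):
        self.frozen = True                       # neither domain advances past here

    def snapshot(self):
        assert self.frozen, "state is only coherent while time is stopped"
        return dict(self.cpu_regs), dict(self.dev_regs)

    def poke_device(self, reg, value):
        assert self.frozen
        self.dev_regs[reg] = value               # e.g. force an error bit to exercise an ISR

sim = FrozenCoSim()
sim.hit_breakpoint()
sim.poke_device("STATUS", 0x2)                   # inject a condition while paused
cpu, dev = sim.snapshot()
print(hex(cpu["pc"]), hex(dev["STATUS"]))        # 0x100 0x2
```

The key property is the coherence guard: because both domains are halted at the same instant, the CPU and device register values in the snapshot are guaranteed to belong to the same moment in simulated time.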

Programmable Silicon Complements Co-Verification

The chances for first-time success with a design are greatly increased by early integration and testing in the virtual prototype domain.

However, there are classes of problems involving behavior that can only be captured when the processor runs at full speed. In this regard, platform FPGAs serve as a perfect complement to virtual platform debug techniques. Designs can be downloaded into FPGA silicon for validation at full system speeds. If problems escaped earlier attention, the designer can debug in-system with the Xilinx ChipScope™ Pro interactive logic analyzer or go back to the co-verification environment for a more controlled analysis. Design errors can be fixed and re-implemented in silicon without incurring the huge delays and costly mask re-spins common with ASIC design flows.

Conclusion

The current generation of Xilinx Platform FPGAs with powerful RISC processors and multi-million gate capacities requires powerful and matching co-verification methodologies. With the introduction of Seamless FPGA, FPGA designers now have access to an ASIC-strength, best-in-class debug solution. The tool provides an efficient and easy-to-use methodology that can integrate, verify, and debug hardware and software interactions very early in the design cycle – preserving and enhancing the critical time-to-market advantage of large-scale platform FPGAs.
