Marrying Flexibility and Complexity

Verifying a DSP in an FPGA

FPGAs are an excellent choice for speeding the silicon realization of complex digital signal processing (DSP) algorithms.  However, the flexibility of the fabric does not eliminate the complexity of the verification.  Traditional FPGA “burn-and-churn” techniques must be replaced with a plan-driven approach that captures the overall verification intent.  From there, randomized tests constructed with Accellera’s Universal Verification Methodology (UVM™) should be combined with formal analysis so that verification converges predictably.  With a suitably sophisticated approach, the DSP-FPGA marriage will be a happy one.

What is Your Intent?

Flexibility is critical to finding the right IP foundation on which to implement your DSP function, but locking down that DSP IP configuration is just as critical to achieving convergence and closure.  DSP algorithms are often implemented as a combination of digital hardware, embedded software, and an interface to analog sensors.  The ability to make tradeoffs among these elements lets engineers fit their requirements to the performance and capacity of their target FPGA devices.  It is the capacity aspect that is the double-edged sword: while it enables a more complex design, it also demands a verification plan to ensure that the design intent is implemented properly.

Let’s take Altera’s IP and DSP design systems as an example.  With tools like SOPC Builder and DSP Builder, you can configure and assemble IP quickly and then generate a Verilog or VHDL description for simulation and implementation.  Following the programming flow all the way to the lab bench, you can run “what if” tests at full system speed – something not possible at the same point in an ASIC flow, and a key reason FPGAs are so attractive for DSP implementation.

This rapid design cycle can help you close the DSP IP configuration quickly, but it leaves many questions open in the verification intent.  While design complexity grows roughly linearly, verification complexity grows exponentially, on the order of 2^x, where x is the number of state bits (hardware registers and software memory) in the system; a design with just 100 state bits has on the order of 10^30 reachable states.  The implication is that the “smoke tests” used to check the design configuration may be entirely insufficient to close the verification.

Returning to the DSP system design itself – hardware plus software plus sensors – the value of a unified verification plan becomes evident.  It is only in the verification plan that design intent is mapped to verification intent so that closure can be automated.  Using the metric-driven verification (MDV) approach, engineers can map the verification plan to specific metrics, such as those derived from functional coverage, setting up an environment in which verification progress can be monitored and the quality of the overall system can be measured.  The metrics can come from simulators running the Verilog and VHDL code produced by tools like SOPC Builder and DSP Builder, but they can also come from formal analysis of the state machines that implement the DSP control logic or from connectivity checks of the purchased and custom IP that constitute the DSP implementation.  As the complexity of the overall DSP function grows, traditional simulation combined with bench testing must be replaced with a plan-driven approach.

Figure 1:  Metric-Driven Verification Expanding to Cover the Functional Needs of a DSP System
Source: Cadence Design Systems, Inc.
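
To make this concrete, consider what one line item in such a verification plan might map to.  The following SystemVerilog sketch assumes a hypothetical FIR filter block with a 2-bit coefficient-mode select and 16-bit signed samples; the module, signal, and bin names are illustrative only, not taken from any particular IP:

// Minimal sketch of a functional-coverage monitor for a hypothetical FIR
// filter block; signal and bin names are illustrative, not from any real IP.
module fir_coverage_monitor (
  input logic        clk,
  input logic [1:0]  coeff_mode,   // filter configuration select
  input logic [15:0] sample        // signed input sample
);

  covergroup fir_cfg_cg @(posedge clk);
    option.per_instance = 1;

    // Which coefficient-set configurations have been exercised
    cp_mode : coverpoint coeff_mode {
      bins bypass    = {2'b00};
      bins low_pass  = {2'b01};
      bins high_pass = {2'b10};
      bins band_pass = {2'b11};
    }

    // Input dynamic range: positive and negative full scale, plus zero
    cp_sample : coverpoint sample {
      bins max_pos = {16'h7FFF};
      bins max_neg = {16'h8000};
      bins zero    = {16'h0000};
    }

    // Each filter mode must be observed with each input-range corner
    cx_mode_range : cross cp_mode, cp_sample;
  endgroup

  fir_cfg_cg cg = new();

endmodule

A section of the verification plan can then reference the cross coverage in this monitor, so progress toward “every filter mode exercised across the full input range” is measured automatically rather than inferred from a pile of bench tests.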

 

Abstraction and Obfuscation

As an experienced FPGA designer, you may be saying: “Of course I have a plan.  I just need to spend more time in the lab executing that plan for my next DSP.”  If verification complexity directly tracked design complexity, this statement would be true, but the exponential nature of verification means that you will soon find a project where the complexity obfuscates bugs.

The main issue in bench-based verification is observability.  Both Xilinx and Altera offer the ability to probe internal signals in their devices and bring that information to the pins.  This capability is both useful and necessary, but what if the problem you see on the pins is the result of an error that occurred thousands of cycles earlier, in an IP block seemingly unrelated to the one in which the problem is manifest?  Furthermore, the signals are typically just that – individual signals – from which the engineer must assemble a temporal transaction frame to understand the data they represent.  The separation in location, time, and abstraction between where the problem is observed and where the bug actually occurs drives the “burn-and-churn” cycle of moving probes, reprogramming the device, rerunning the bench tests, examining results, and then moving the probes again.  This cycle can make bench-based debug an open-ended process.

Directed-random testbenches operating at the transaction level of abstraction and leveraging the new Accellera UVM standard are key to breaking the cycle.  As the hardware, software, and sensor engineering teams negotiate how the DSP will react to different types of input, the agreements can be codified as transaction sequences.  These sequences can be randomized to cover the rapidly growing state space and monitored using assertions and functional coverage.  When run in simulation, the gain in observability is immediate: the entire system can be probed at any time, assuming the IP is not encrypted.  Assertions take observability one step further by providing an early-warning system for bugs as well as localization.  These “internal sensors” continuously monitor both nominal and error behavior inside IP blocks and at IP interfaces, which is key to identifying the true source of a bug, especially when the source and its manifestation are separated in time.  Furthermore, the combined UVM and assertion model can be extended to both the software environment and the analog sensors, through transaction-level modeling for the former and real-number modeling in a digital simulator for the latter.
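
As a rough sketch of how such an agreement might be codified, the following UVM sequence generates bursts of sensor transactions in which roughly five percent of the samples deliberately fall outside the legal range.  The transaction fields, constraints, and percentages are assumptions made for illustration, not part of any standard library or specific sensor interface:

// Illustrative UVM sequence item and sequence for a hypothetical sensor
// interface; field names and constraints are assumptions, not a real IP's API.
import uvm_pkg::*;
`include "uvm_macros.svh"

class sensor_txn extends uvm_sequence_item;
  rand bit [15:0] sample;       // raw ADC-style sample
  rand bit        out_of_range; // occasionally inject an illegal value

  constraint c_range {
    out_of_range dist { 0 := 95, 1 := 5 };             // mostly legal traffic
    if (!out_of_range) sample inside {[16'h0400 : 16'h7C00]};
  }

  `uvm_object_utils_begin(sensor_txn)
    `uvm_field_int(sample,       UVM_ALL_ON)
    `uvm_field_int(out_of_range, UVM_ALL_ON)
  `uvm_object_utils_end

  function new(string name = "sensor_txn");
    super.new(name);
  endfunction
endclass

class sensor_burst_seq extends uvm_sequence #(sensor_txn);
  `uvm_object_utils(sensor_burst_seq)

  // Randomize the sequence (or set burst_len directly) before starting it
  rand int unsigned burst_len;
  constraint c_len { burst_len inside {[8:64]}; }

  function new(string name = "sensor_burst_seq");
    super.new(name);
  endfunction

  task body();
    repeat (burst_len) begin
      `uvm_do(req)   // create, randomize, and send one transaction
    end
  endtask
endclass

Because each sequence item carries the out_of_range flag, scoreboard checks and assertions can distinguish intentional error injection from genuine design misbehavior.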

Convergence Ends Burn-and-Churn

As mentioned earlier, the greatest challenge to implementing complex DSPs in FPGAs is burn-and-churn: the seemingly endless lab cycle of making design changes and reprogramming the FPGA while staying within the fixed PCB implementation.  If the PCB itself must be changed, the debug cycle stretches by days or weeks.  Offerings like Xilinx’s DSP Platform can help address some of these challenges.

By marrying abstraction, simulation, and implementation into a coherent flow, the DSP Platform opens the door to advanced verification.  It enables rapid design exploration and supports the simulation environment needed to improve observability.  Engineers can add the UVM sequences, assertions, real-number models, and software debug needed to validate the overall system.  Combined with bench testing, the overall solution enables rapid convergence on a bug-free design.
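
Real-number modeling is what lets the analog sensor path participate in this digital simulation environment.  The module below is a minimal sketch of a hypothetical 12-bit sensor front end; the full-scale voltage and the ideal quantization are assumptions for illustration, and a production model would be calibrated against the actual sensor and converter specifications:

// Minimal real-number model of a hypothetical analog sensor front end.
// The 1.2 V full scale and ideal quantization are illustrative assumptions.
module sensor_rnm (
  input  real         vin,      // sensed quantity as a voltage
  input  logic        clk,
  output logic [11:0] adc_code  // 12-bit converter output
);

  localparam real VFS = 1.2;    // assumed full-scale input voltage

  // Clamp to the input range, then map linearly to a 12-bit code
  function automatic logic [11:0] quantize(real v);
    real clamped;
    clamped = (v < 0.0) ? 0.0 : (v > VFS) ? VFS : v;
    return $rtoi((clamped / VFS) * 4095.0);
  endfunction

  always_ff @(posedge clk)
    adc_code <= quantize(vin);

endmodule

Because the model is pure SystemVerilog, it runs at digital simulation speed, so UVM sequences can sweep vin across corner values without invoking an analog solver.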

Sounds exciting – but how do you know that the tests you are running are the right ones to measure system quality?  That is where the process circles back to the verification plan.  Using functional coverage metrics, the completeness of the test suite can be measured against the intent agreed to by the sensor, FPGA DSP, and software teams.  The predictability, productivity, and quality enabled by metric-driven verification replace burn-and-churn with a far more efficient path to the silicon realization of DSP functions in FPGAs.

Figure 2:  Methodology Shift from Burn-and-Churn to UVM Verification
Source: Cadence Design Systems, Inc.

 

Summary and Next Steps

For FPGA engineers accustomed to a traditional bench-based verification flow, the methodology shift described in this paper may be both enticing and daunting.  The key to adopting these techniques is to follow the path blazed by ASIC engineers.  The first step is to add assertions to the Verilog and VHDL code.  These can often be instantiated from a library, and they provide the observability that helps localize bugs.  Assertions also enable formal analysis and automatically generate functional coverage that can serve as metrics in the verification plan.  Sequences built with UVM augment the environment with reusable verification IP that can dramatically reduce the verification effort on subsequent projects.  Finally, adding analog and software abstractions to create a comprehensive system view brings together all of the engineering disciplines working on the DSP.  One way to explore this complete environment is the Incisive® Verification Kit, which includes both IP and documentation representing a complete, complex system.  By adding advanced verification, the marriage of flexibility and complexity for FPGA DSP designs will be a long and happy one.
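
As a concrete illustration of that first step, the checker below shows the kind of assertions that can be bound into existing Verilog or VHDL RTL without modifying it.  The streaming handshake signals (valid, ready, sop, eop), the 1024-beat frame limit, and the module names are hypothetical assumptions, not the interface of any specific IP:

// Illustrative SVA checks for a hypothetical streaming interface between
// two DSP blocks; signal names and limits are assumptions for this sketch.
module stream_checks (
  input logic clk,
  input logic rst_n,
  input logic valid,
  input logic ready,
  input logic sop,    // start of frame
  input logic eop     // end of frame
);

  // A sample that is offered must be held until it is accepted.
  property p_hold_until_ready;
    @(posedge clk) disable iff (!rst_n)
      valid && !ready |=> valid;
  endproperty
  a_hold_until_ready : assert property (p_hold_until_ready)
    else $error("valid dropped before ready was seen");

  // Every frame that starts must finish within 1024 accepted beats.
  property p_frame_completes;
    @(posedge clk) disable iff (!rst_n)
      (valid && ready && sop) |-> ##[1:1024] (valid && ready && eop);
  endproperty
  a_frame_completes : assert property (p_frame_completes);

  // Functional coverage falls out of the same signals.
  c_backpressure : cover property (@(posedge clk) valid && !ready);

endmodule

// Attach the checks to a (hypothetical) RTL module without editing it:
//   bind dsp_stream_stage stream_checks u_checks (.*);

The same properties serve triple duty: simulation uses them as the “internal sensors” described earlier, formal analysis can attempt to prove them outright, and the cover directive feeds the functional-coverage metrics referenced in the verification plan.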

Figure 3:  Incisive Verification Kit for Advanced FPGA Verification Methodologies
Source: Cadence Design Systems, Inc.

 

About the Author:  Adam Dale Sherer, Verification Product Management Director, Cadence Design Systems, Inc.

Adam markets the UVM and the multi-language verification simulator for Cadence, tapping 19 years of experience in verification and software engineering, including roles in marketing, product management, applications engineering, and R&D.  Adam is the secretary of the Accellera Verification IP Technical Subcommittee (VIPTSC), which standardized the UVM.  Adam blogs on verification subjects at http://www.cadence.com/community/fv/ and tweets on them @SeeAdamRun.

  • MS EE from the University of Rochester, with research published in the IEEE Transactions on CAD
  • BS EE and BA CS from SUNY Buffalo
