Best Practices for FPGA Prototyping of MATLAB and Simulink Algorithms

As the complexity of modern FPGAs and ASICs increases, engineers are discovering that verification with HDL simulators alone is not enough to test system-level design requirements efficiently.

Many engineers are now deploying FPGAs for algorithm acceleration and prototyping. Using FPGAs to process large test data sets enables engineers to rapidly evaluate algorithm and architecture tradeoffs and to test designs under real-world scenarios without incurring the heavy time penalty of HDL simulators. System-level design and verification tools such as MATLAB and Simulink enable engineers to realize these benefits by rapidly prototyping their algorithms on FPGAs.

This paper describes Model-Based Design best practices for creating FPGA prototypes with MATLAB and Simulink. The best practices are listed below and highlighted in Figure 1.

(1)   Analyze the effect of fixed-point quantization early in the design process and optimize the word length to yield smaller and more power-efficient implementations

(2)   Use automatic HDL code creation to produce FPGA prototypes faster

(3)   Reuse system-level test benches with HDL cosimulation to analyze HDL implementations using system-level metrics

(4)   Accelerate verification with FPGA-in-the-loop simulation

[Figure 1]

Why prototype on FPGAs?

Prototyping algorithms on an FPGA increases confidence that the algorithm will work in the real world. In addition to running test vectors and simulation scenarios at speed, engineers can use FPGA prototypes to exercise software functionality and adjacent system-level functions, such as RF and analog subsystems.

Moreover, because FPGA prototypes run much faster than simulations, larger data sets can be used, potentially exposing bugs that a simulation model would not uncover.

Model-Based Design using HDL code generation enables teams to produce the first prototype faster than a manual workflow, as illustrated in Figure 2. In addition, this approach lets engineers make algorithm changes at the system level, rather than at the implementation level, which accelerates hardware iterations.

[Figure 2]

Digital Down Converter Case Study

To illustrate best practices for FPGA prototyping using Model-Based Design, this paper uses a digital down converter (DDC) as a case study. A DDC is a common building block in many communications systems (see Figure 3). It transforms a high-rate passband input, which requires substantial computational resources to process, into a low-rate baseband output that can be processed by less computationally intensive DSP algorithms.

[Figure 3]

The main components of a DDC are listed below (see Figure 4) and illustrated in the MATLAB sketch that follows:

–        Numerically controlled oscillator (NCO)

–        Mixer

–        Digital filter chain

[Figure 4]
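To make the structure concrete, here is a minimal MATLAB sketch of the same signal path. The sample rate, carrier frequency, filter order, and decimation factor are illustrative assumptions, and a single low-pass FIR stands in for the multistage filter chain:

    fs  = 80e6;  fc = 20e6;            % assumed input sample rate and carrier
    n   = 0:9999;
    x   = cos(2*pi*(fc/fs)*n);         % high-rate passband input
    nco = exp(-1j*2*pi*(fc/fs)*n);     % numerically controlled oscillator (NCO)
    v   = x .* nco;                    % mixer: translate the carrier down to DC
    h   = fir1(127, 1/64);             % low-pass stand-in for the filter chain
    y   = filter(h, 1, v);             % remove image and out-of-band energy
    y   = y(1:32:end);                 % decimate to the low baseband rate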

Best Practice #1 – Analyze the Effect of Fixed-Point Quantization Early in the Design Process

Engineers typically test new ideas and develop initial algorithms using floating-point data types. Hardware implementation in FPGAs and ASICs, however, requires conversion to fixed-point data types, which often introduces quantization errors. In a manual workflow, fixed-point quantization is usually performed during HDL coding. At that point, the engineer cannot easily quantify the effect of quantization by comparing the fixed-point representation against a floating-point reference, nor easily analyze the HDL implementation for overflows.

To make intelligent decisions about the required fraction lengths, engineers need a way to compare floating-point simulation results against fixed-point simulation results before starting the HDL coding process. Increasing the fraction length reduces quantization error, but it also increases the word length, and with it the area and power consumption of the implementation.
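A minimal sketch of such a comparison in MATLAB, using the fi fixed-point object. The filter, stimulus, and word and fraction lengths are assumptions, and only coefficient and input quantization are modeled, not intermediate rounding:

    b    = fir1(30, 0.25);             % example low-pass filter stage
    x    = randn(1, 1e4);              % test stimulus
    yRef = filter(b, 1, x);            % floating-point reference result

    bQ = double(fi(b, true, 16, 15));  % coefficients: 16-bit word, 15-bit fraction
    xQ = double(fi(x, true, 16, 12));  % stimulus: 16-bit word, 12-bit fraction
    yQ = filter(bQ, 1, xQ);            % fixed-point (quantized) simulation

    plot(yRef - yQ);                   % quantization error at every sample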

[Figure 5]

For example, Figure 5 illustrates the differences between the floating-point and fixed-point simulation results for stage 1 of the low-pass filter in the DDC filter chain. These differences are due to fixed-point quantization. The top plot shows the floating-point and fixed-point simulation results overlaid; the bottom plot shows the quantization error at every point. Depending on the design specification, engineers may need to increase fraction lengths to reduce the introduced quantization error.

In addition to selecting a fraction length, engineers must optimize the word length to achieve low-power and area-efficient designs. 
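One way to explore this tradeoff is to sweep the word length and record the worst-case quantization error, keeping the smallest word length that still meets the design specification. A sketch continuing the example above (b, x, and yRef as defined there; the loop bounds and scalings are assumptions):

    for wl = 8:16
        bQ = double(fi(b, true, wl, wl - 1));   % quantized coefficients
        xQ = double(fi(x, true, wl, wl - 4));   % quantized stimulus
        e  = max(abs(yRef - filter(bQ, 1, xQ)));
        fprintf('word length %2d bits: max error %.3g\n', wl, e);
    end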

[Figure 6]

In the DDC case study, Simulink Fixed Point was used to reduce the word length of parts of the digital filter chain by as many as 8 bits (see Figure 6).

Best Practice #2 – Use Automatic HDL Code Creation to Produce FPGA Prototypes Faster

HDL code is required to produce an FPGA prototype. Traditionally, Verilog or VHDL code has been written by hand. As an alternative, generating HDL code automatically using Simulink HDL Coder offers several important benefits. Engineers can:

–        Quickly assess if the algorithm can be implemented in hardware

–        Rapidly evaluate different algorithm implementations and choose the best one

–        Prototype algorithms on FPGAs faster

For the DDC case study, we generated 5780 lines of HDL code in 55 seconds. The code is readable and readily understood by engineers (see Figure 7). Automatic code generation enables engineers to make changes in the system-level model and produce an updated HDL implementation in minutes by regenerating the HDL code.
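The commands below sketch this flow for a hypothetical Simulink model named ddc_model with a subsystem DDC prepared for code generation; the model and subsystem names are assumptions:

    load_system('ddc_model');              % open the Simulink model
    hdlsetup('ddc_model');                 % apply HDL-friendly model settings
    makehdl('ddc_model/DDC', ...           % generate HDL for the DDC subsystem
            'TargetLanguage', 'VHDL');
    makehdltb('ddc_model/DDC');            % generate a matching HDL test bench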

[Figure 7]

Best Practice #3 – Reuse System-Level Test Benches for HDL Verification with HDL Cosimulation

HDL cosimulation enables engineers to reuse Simulink models to drive stimuli into the HDL simulator and perform system-level analysis of the simulation output interactively (Figure 8).
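With HDL Verifier (formerly EDA Simulator Link) and ModelSim, for example, a cosimulation session can be started from MATLAB along these lines. The library and entity names and the socket port are assumptions; the Simulink side connects through an HDL Cosimulation block configured with the same port:

    vsim('tclstart', ...
         {'vsimulink work.ddc'}, ...       % load the compiled DDC for cosimulation
         'socketsimulink', 4449);          % must match the HDL Cosimulation block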

[Figure 8]

Whereas HDL simulation provides only digital waveform output, HDL cosimulation provides complete visibility into the HDL code as well as access to Simulink's full suite of system-level analysis tools. When a difference between expected results and HDL simulation results is observed, cosimulation enables engineers to understand the system-level effect of the mismatch.

For example, in Figure 9, the spectrum scope view enables the engineer to make an informed decision to ignore the mismatch between the expected and HDL simulation results because the differences lie in the stop band. The digital waveform output, in contrast, simply flags the mismatch as an error. The engineer may eventually arrive at the same conclusion, but the required analysis would take more time.

[Figure 9]
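A command-line analogue of that spectrum-scope check is to estimate the spectrum of the mismatch and confirm that its energy is confined to the stop band; yExpected, yHdl, and fs are assumed to come from the cosimulation run:

    [pxx, f] = pwelch(yExpected - yHdl, [], [], [], fs);  % spectrum of the error
    plot(f, 10*log10(pxx));                 % inspect where the error energy lies
    xlabel('Frequency (Hz)'); ylabel('Power/frequency (dB/Hz)');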

Best Practice #4 – Accelerate Verification with FPGA-in-the-loop Simulation

Once the DDC algorithm has been verified through HDL simulation or cosimulation, it is ready to be deployed on an FPGA target platform. FPGA-based verification of the algorithm, also referred to as FPGA-in-the-loop (FIL) simulation, increases confidence that the algorithm will work in the real world and enables engineers to run test scenarios faster than traditional HDL simulation.

For the DDC algorithm, the Simulink model is used to drive FPGA input stimuli and to analyze the output of the FPGA (Figure 10).  As with HDL cosimulation, the results are available in Simulink for analysis. 
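In HDL Verifier, the filWizard command is the entry point for this setup: it opens the FPGA-in-the-loop wizard, which packages the generated HDL files into a FIL simulation block for a supported development board. Board selection and connection settings are made interactively, so no complete script is shown:

    filWizard;   % launch the FPGA-in-the-loop wizard from HDL Verifier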

[Figure 10]

Table 1 compares the two verification methods, HDL cosimulation and FPGA-in-the-loop simulation, used for the DDC design.

In this case, FPGA-in-the-loop simulation was 23 times faster than HDL cosimulation. Such speed increases enable engineers to run more extensive sets of test cases and perform regression tests on their designs, identifying potential problem areas that need more detailed analysis.

Though slower, HDL cosimulation provides more visibility into the HDL code. It is, therefore, well suited for more detailed analysis of the problem areas that are found during FPGA-in-the-loop simulation.

[Table 1]

Summary

Following the four best practices outlined in this article enables engineers to develop FPGA prototypes much faster and with greater confidence than a traditional, manual workflow. In addition, engineers can continue to refine their models throughout development and rapidly regenerate code for FPGA implementation, enabling much shorter design iterations than a workflow that relies on hand-written HDL. To learn more about the workflow outlined here or to download a technical kit, visit http://www.mathworks.com/programs/techkits/techkit_asic_response.html
