
Faster Floating-Point

Altera Smooths Path to Floating-Point FPGA

We’ve done dozens of articles about how awesome FPGAs are for signal processing applications – always taken with a grain of salt.  We’ve pointed and laughed as FPGA vendors boasted of their gaggles of GMACs that nobody would ever realize in a practical DSP design.  We raised an eyebrow when they told us how easy their DSP design flow for FPGAs was – heck, even a software guy could do it. Not. We even scrutinized (with suspicion) their high-level synthesis methodologies and were typically less than flabbergasted at the complexity that sat right beneath the surface. 

Over time, however, DSP-on-FPGA has become a pretty well-worn and successful path.  Even DSP-processor stalwarts like BDTI gave high praise to the capabilities of FPGAs as DSP machines – capable of much higher throughput on dramatically lower power budgets, and codable with comparable effort – when compared with the more complex software-programmed DSPs.  It seemed that FPGAs were earning their stripes as go-to devices for tough signal processing applications.

Except for one thing.

Hidden in the fine print was the double-asterisk footnote that said that you really needed to convert your algorithm to fixed-point to be able to behold the beauty of FPGA fantasticness.  That’s no problem, right?  It just requires you to find the appropriate… huh, dang, maybe it’s not so simple after all.  In fact, there have even been start-ups whose whole charter was to develop software tools to assist in converting algorithms from software-esque floating-point to hardware-friendly fixed-point implementations – and doing the complicated analysis to see what fidelity you lost in the process. 
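The conversion and fidelity analysis those start-ups automated can be sketched in a few lines.  Here is a minimal, illustrative example (the function names are ours, not from any real tool) that quantizes a floating-point signal to 16-bit Q1.15 fixed-point and measures the worst-case error introduced:

```python
# Illustrative sketch: quantize a floating-point signal to Q1.15
# fixed-point and measure the fidelity lost in the process.
import math

def to_q15(x):
    """Quantize x in [-1, 1) to a 16-bit Q1.15 integer, saturating."""
    return max(-32768, min(32767, int(round(x * 32768))))

def from_q15(q):
    """Recover the (approximate) real value from a Q1.15 integer."""
    return q / 32768.0

samples = [math.sin(2 * math.pi * k / 64) * 0.8 for k in range(64)]
quantized = [from_q15(to_q15(s)) for s in samples]
max_err = max(abs(a - b) for a, b in zip(samples, quantized))
# Worst-case rounding error for Q1.15 is half an LSB: 1/65536
assert max_err <= 1 / 65536
```

The hard part, of course, is not the rounding itself but deciding where the binary point goes at every node of the algorithm – and proving the accumulated error is still acceptable at the output.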

For many designs, the complexity of adapting them to fixed-point was rewarded with blazing fast speeds and incomparable power efficiency.  The big rewards on the other side of the quantization chasm were enough to lure designers into taking the plunge, doing the big math, and figuring out where to put the decimal point.  Some designs, however, don’t lend themselves to floating-to-fixed-point transformation – no matter how badly we want to use FPGAs.  Algorithms like those in linear algebra – that require high dynamic range and are extra-sensitive to the types of errors introduced by quantization – really need floating-point implementations to work properly.

“Never fear,” said the FPGA companies.  “You can do floating point with our FPGAs, too!  Just, uh, take the hard-wired multipliers and add some exponent manipulation to the side using the FPGA fabric and work out the tricky bits with the design flow, and uh, uh-oh,… Actually, the details are left as an exercise for the design team.  Good luck!”
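What the vendors were glossing over looks roughly like this.  A floating-point multiply decomposes into a fixed-point mantissa multiply (which the hard DSP block handles nicely) plus exponent addition and renormalization (which lands in the FPGA fabric as extra logic).  The sketch below is deliberately simplified – no signs, infinities, NaNs, or IEEE rounding modes – and is our illustration, not any vendor’s implementation:

```python
# Simplified sketch of a floating-point multiply on FPGA resources:
# the value of (m, e) is m * 2**(e - frac_bits), with m normalized
# so that 2**frac_bits <= m < 2**(frac_bits + 1).

def fp_mul(m_a, e_a, m_b, e_b, frac_bits=23):
    m = m_a * m_b                        # hard multiplier does this part
    e = e_a + e_b                        # exponent add lives in fabric
    # Product of two normalized mantissas may overflow 1.x form;
    # renormalize with a shift (more fabric logic).
    while m >= (2 << (2 * frac_bits)):
        m >>= 1
        e += 1
    m >>= frac_bits                      # truncate extra fraction bits
    return m, e

# 1.5 * 2.5 = 3.75
m, e = fp_mul(3 << 22, 0, 5 << 21, 1)
assert m * 2.0 ** (e - 23) == 3.75
```

Every operator in the datapath drags along its own normalization shifter and exponent logic, which is exactly where the pain – and the wasted fabric – comes from.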

“Good luck” is what you would need, too – because getting floating-point working on most FPGAs is a Rube-Goldbergian exercise in non-linear engineering.  The FPGAs and the IP were not really designed with floating point in mind, and that fact becomes painfully clear in both the ease of implementation and the performance.  

Altera has changed all that, however.  We wrote just over a year ago that they had done some serious remodeling on their DSP blocks – featuring a change from the venerable 18×18 multiplier to a more versatile variable-precision DSP block.  That enhancement, it turns out, was just the first shoe to drop.  Most DSP-savvy folk probably guessed that the next thing to come down the pike would be full-fledged floating-point support.  Good guess, DSP-savvy folk! 

Now, Altera has done the heavy lifting for us and, as a bonus, BDTI has even evaluated the resulting design flow and hardware implementations – with a pretty big thumbs-up.  Floating-point math on FPGAs is now a practical reality.  Altera’s approach was to add floating-point blocks to its DSP Builder Advanced Blockset, with a comprehensive design and verification flow built around them.  The flow is similar to other model-based DSP design flows – allowing blocks to be stitched together in tools like Simulink from MathWorks.  However, Altera goes one (important) step further by optimizing the datapath across multiple blocks, eliminating much of the overhead associated with block-based algorithm design.  As the datapath is assembled, Altera’s tool automatically chooses the level of normalization required to match up the exponents from one stage to the next.  The result is a significant reduction in the extra hardware required for normalization and de-normalization. 
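The idea behind that fused-datapath optimization can be illustrated in software (this is our sketch of the concept, not Altera’s actual algorithm): rather than rounding and renormalizing after every operation, carry intermediates in an extended exact form and normalize once at the end of the chain, saving the shifter logic at each stage:

```python
# Concept sketch of deferred normalization: accumulate exactly,
# normalize/round only once at the end of the datapath.
from fractions import Fraction

def dot_fused(a, b):
    """Dot product with no per-term rounding; one final rounding step."""
    acc = Fraction(0)
    for x, y in zip(a, b):
        acc += Fraction(x) * Fraction(y)   # exact intermediate
    return float(acc)                       # single final normalize/round

a = [0.1, 0.2, 0.3]
b = [4.0, 5.0, 6.0]
assert abs(dot_fused(a, b) - 3.2) < 1e-9
```

In hardware, the payoff is twofold: fewer normalization shifters burning fabric, and (often) slightly better numerical results, since intermediate rounding steps are eliminated.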

BDTI’s evaluation of the Altera flow was done with production tools and hardware, and it consisted of the implementation and evaluation of a Cholesky solver, which “finds the inverse of a Hermitian positive definite matrix to solve for the vector x in a simultaneous set of linear equations of the form Ax = B.”  If you’re not current on your Hermitian positive definite matrices – it’s a decent example for evaluating a floating-point design flow and the resulting hardware performance.  The target FPGA was an Altera Stratix IV (not even the latest-generation Stratix V), and BDTI was able to get outstanding results (our words, not theirs), both from the design flow and from the hardware.  You can read the full BDTI report in a white paper here.
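For readers who want a refresher on the math behind the benchmark, here is a plain-Python sketch of Cholesky-based solving for the real symmetric positive definite case (BDTI’s design targeted complex Hermitian matrices in FPGA hardware; this simplified version just shows the algorithm’s structure and why its square roots and divisions demand floating point’s dynamic range):

```python
# Sketch: solve Ax = b via Cholesky factorization A = L @ L^T,
# then two triangular solves (real symmetric positive definite case).
import math

def cholesky_solve(A, b):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):                      # factor: A = L @ L^T
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    y = [0.0] * n                           # forward solve: L y = b
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n                           # back solve: L^T x = y
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

A = [[4.0, 2.0], [2.0, 3.0]]
b = [10.0, 8.0]
x = cholesky_solve(A, b)          # x = [1.75, 1.5]
assert all(abs(sum(A[i][j] * x[j] for j in range(2)) - b[i]) < 1e-9
           for i in range(2))
```

The subtractions inside the square root and the divisions by diagonal elements are exactly the operations that blow up under fixed-point quantization – which is why this class of algorithm was chosen for the floating-point evaluation.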

There has always been a gap in design effort between implementing a complex, performance-demanding algorithm on a conventional processor like a DSP and implementing that same algorithm in FPGA hardware.  In the old days, we characterized the difference as “10x the design effort for 10x the performance.”  However, that gap has closed significantly in recent years.  For one thing, getting the most performance out of a modern DSP processor requires coding skill and knowledge of the underlying hardware that far exceed “normal” software engineering skills.  As DSP processors have gotten more complex, so has the task of extracting their full performance.  FPGAs, on the other hand, have gotten continuously easier to use for DSP design.  With the big signal-processing market out there just beckoning, the FPGA companies (as well as third-party tool suppliers) have worked hard to take the pain out of the hardware-intensive design process for DSP-on-FPGA implementations.  The latest step in that evolution could prove to be one of the biggest, as it could remove that last nagging “fixed-point only” footnote from the FPGA DSP brag-sheet. 

Altera says their floating-point DSP flow is available now in their standard tools and current FPGA products.  Further enhancements will follow with future versions of the tool flow and with the just-hitting-the-market 28nm FPGAs.
