
Faster Floating-Point

Altera Smooths Path to Floating-Point FPGA

We’ve done dozens of articles about how awesome FPGAs are for signal processing applications – always taken with a grain of salt.  We’ve pointed and laughed as FPGA vendors boasted of gaggles of GMACs that no practical DSP design would ever realize.  We raised an eyebrow when they told us how easy their DSP design flow for FPGAs was – heck, even a software guy could do it. Not. We even scrutinized (with suspicion) their high-level synthesis methodologies, and typically found no shortage of complexity lurking just beneath the surface.

Over time, however, DSP-on-FPGA has become a pretty well-worn and successful path.  Even DSP processor stalwarts like BDTI gave high praise to the capabilities of FPGAs as DSP machines – capable of much higher throughput on dramatically lower power budgets, and codable with comparable effort – when compared with the more complex software-programmed DSPs.  It seemed that FPGAs were earning their stripes as go-to devices for tough signal processing applications.

Except for one thing.

Hidden in the fine print was the double-asterisk footnote that said that you really needed to convert your algorithm to fixed-point to be able to behold the beauty of FPGA fantasticness.  That’s no problem, right?  It just requires you to find the appropriate… huh, dang, maybe it’s not so simple after all.  In fact, there have even been start-ups whose whole charter was to develop software tools to assist in converting algorithms from software-esque floating-point to hardware-friendly fixed-point implementations – and doing the complicated analysis to see what fidelity you lost in the process. 

For many designs, the effort of adapting them to fixed-point was rewarded with blazing speed and incomparable power efficiency.  The big rewards on the other side of the quantization chasm were enough to lure designers into taking the plunge, doing the big math, and figuring out where to put the binary point.  Some designs, however, don’t lend themselves to floating-to-fixed-point transformation – no matter how badly we want to use FPGAs.  Algorithms like those in linear algebra – which require high dynamic range and are extra-sensitive to the errors introduced by quantization – really need floating-point implementations to work properly.
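To make that trade-off concrete, here is a minimal Python sketch of the kind of fidelity analysis involved – quantizing values to a Q1.15 fixed-point format and measuring what gets lost.  The format and the sample values are our own, chosen purely for illustration; no vendor tool works exactly this way.

```python
# A minimal sketch of the fidelity analysis behind fixed-point conversion.
# The Q1.15 format and the sample values are hypothetical, for illustration only.

FRAC_BITS = 15                    # Q1.15: one sign bit, 15 fractional bits
SCALE = 1 << FRAC_BITS

def to_q15(x):
    """Quantize a float in [-1, 1) to a 16-bit two's-complement integer."""
    return max(-SCALE, min(SCALE - 1, int(round(x * SCALE))))

def from_q15(q):
    """Convert the fixed-point integer back to a float."""
    return q / SCALE

# Values spanning a wide dynamic range -- the kind linear algebra tends to produce.
for x in [0.75, 0.01, 1.0e-4, 3.0e-6]:
    err = abs(x - from_q15(to_q15(x)))
    print(f"x={x:10.3e}  abs err={err:.2e}  rel err={err / x:.1%}")

# The smallest values lose most (or all) of their precision -- exactly the
# quantization sensitivity that pushes these algorithms toward floating point.
```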

“Never fear,” said the FPGA companies.  “You can do floating point with our FPGAs, too!  Just, uh, take the hard-wired multipliers and add some exponent manipulation to the side using the FPGA fabric and work out the tricky bits with the design flow, and uh, uh-oh,… Actually, the details are left as an exercise for the design team.  Good luck!”
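For anyone who hasn’t had the pleasure, the “exponent manipulation to the side” goes roughly like this: a floating-point multiply decomposes into an integer multiply of the mantissas (the hard multiplier’s job), an add of the exponents, and a normalization step.  The toy Python sketch below shows that decomposition – it is a conceptual illustration only, not a model of any particular FPGA’s DSP block or IP, and the mantissa width is simply an assumption.

```python
# Toy illustration of building a floating-point multiply from an integer
# (mantissa) multiplier plus exponent logic.  Conceptual only -- not a model
# of any particular FPGA DSP block or IP core; MANT_BITS is an assumption.

MANT_BITS = 23   # single-precision-like mantissa width

def fp_mul(sign_a, exp_a, mant_a, sign_b, exp_b, mant_b):
    """Multiply two normalized values of the form (-1)**s * 1.mant * 2**exp."""
    sig_a = (1 << MANT_BITS) | mant_a      # restore the implicit leading 1
    sig_b = (1 << MANT_BITS) | mant_b

    sign = sign_a ^ sign_b                 # sign logic: a single XOR
    exp = exp_a + exp_b                    # exponent path: a small adder
    sig = sig_a * sig_b                    # mantissa path: the hard multiplier's job

    # Normalize: the product of two 1.x significands lies in [1, 4),
    # so at most one right shift is needed.
    if sig >> (2 * MANT_BITS + 1):
        sig >>= 1
        exp += 1
    mant = (sig >> MANT_BITS) & ((1 << MANT_BITS) - 1)   # drop hidden bit, truncate
    return sign, exp, mant

# Example: 1.5 * 1.5, where 1.5 is (sign=0, exp=0, mant=0.5 * 2**23).
print(fp_mul(0, 0, 1 << 22, 0, 0, 1 << 22))   # -> (0, 1, 1048576), i.e. 1.125 * 2 = 2.25
```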

“Good luck” is what you would need, too – because getting floating-point working on most FPGAs is a Rube-Goldbergian exercise in non-linear engineering.  The FPGAs and the IP were not really designed with floating point in mind, and that fact becomes painfully clear in both the ease of implementation and the performance.  

Altera has changed all that, however.  We wrote just over a year ago that they had done some serious remodeling on their DSP blocks – featuring a change from the venerable 18×18 multiplier to a more versatile variable-precision DSP block.  That enhancement, it turns out, was just the first shoe to drop.  Most DSP-savvy folk probably guessed that the next thing to come down the pike would be full-fledged floating-point support.  Good guess, DSP-savvy folk! 

Now, Altera has done the heavy lifting for us and, as a bonus, BDTI has evaluated the resulting design flow and hardware implementations – with a pretty big thumbs-up.  Floating-point math on FPGAs is now a practical reality.  Altera’s approach was to add floating-point blocks to its DSP Builder Advanced Blockset, with a comprehensive design and verification flow built around them.  The flow is similar to other model-based DSP design flows – allowing blocks to be stitched together in tools like Simulink from MathWorks.  However, Altera goes one (important) step further by optimizing the datapath across multiple blocks, eliminating much of the overhead associated with block-based algorithm design.  As the datapath is assembled, Altera’s tool automatically chooses the level of normalization required to match up the exponents from one stage to the next.  The result is a significant reduction in the extra hardware required for normalization and de-normalization.
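Conceptually, the savings come from carrying intermediate results in a wider, un-normalized form and normalizing once at the end of a chain of operations, rather than after every add and multiply.  The Python sketch below illustrates that principle on a dot product using an extended accumulator – our own toy model of the idea, not a description of Altera’s actual fused datapath, and the accumulator width is an arbitrary assumption.

```python
# Conceptual sketch of a "fused" floating-point datapath: keep partial results
# in a wide accumulator and normalize once at the end, instead of re-normalizing
# after every operation.  A toy model of the principle -- not Altera's actual
# datapath, and the 80-bit accumulator width is an arbitrary assumption.

from math import frexp, ldexp

ACC_FRAC_BITS = 80   # width of the fixed-point accumulator (assumed)

def dot_fused(xs, ys):
    """Dot product with a single normalization/rounding step at the end."""
    acc = 0
    for x, y in zip(xs, ys):
        m, e = frexp(x * y)                          # product = m * 2**e, 0.5 <= |m| < 1
        acc += round(ldexp(m, ACC_FRAC_BITS + e))    # align into the wide accumulator
    return ldexp(acc, -ACC_FRAC_BITS)                # normalize once at the end

xs = [1.0e-3, 2.5, -7.0e2, 4.0e-6]
ys = [3.0e2, 1.0e-4, 2.0e-3, 5.0e5]
print(dot_fused(xs, ys), sum(x * y for x, y in zip(xs, ys)))   # the two agree closely
```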

BDTI’s evaluation of the Altera flow was done with production tools and hardware, and it consisted of the implementation and evaluation of a Cholesky solver, which “finds the inverse of a Hermitian positive definite matrix to solve for the vector x in a simultaneous set of linear equations of the form Ax = B.”  If you’re not current on your Hermitian positive definite matrices – suffice it to say it’s a decent example for evaluating a floating-point design flow and the resulting hardware performance.  The target FPGA was an Altera Stratix IV (not even the latest-generation Stratix V), and BDTI was able to get outstanding results (our words, not theirs), both from the design flow and from the hardware.  You can read the full BDTI report in a white paper here.
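For readers who want a golden reference for what that benchmark computes, a Cholesky-based solve of Ax = B looks like this in plain numpy.  This is a software reference model only – not the design BDTI evaluated – and the example matrix is made up for illustration.

```python
# Software golden model of what the benchmark computes: solve A x = b for a
# Hermitian positive definite A via the Cholesky factorization A = L L^H.
# Reference only -- not the FPGA design that BDTI evaluated; the example
# matrix below is made up for illustration.

import numpy as np

def cholesky_solve(A, b):
    L = np.linalg.cholesky(A)               # lower-triangular factor, A = L @ L.conj().T
    y = np.linalg.solve(L, b)               # forward substitution:  L y = b
    return np.linalg.solve(L.conj().T, y)   # back substitution:     L^H x = y
    # (numpy's generic solver is used here; hardware would exploit the
    #  triangular structure with dedicated substitution units)

# Hypothetical 4x4 example: M @ M^H + 4I is guaranteed Hermitian positive definite.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = M @ M.conj().T + 4 * np.eye(4)
b = rng.standard_normal(4) + 1j * rng.standard_normal(4)
x = cholesky_solve(A, b)
print(np.allclose(A @ x, b))                # True
```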

There has always been a gap in design effort and complexity between implementing a performance-demanding algorithm on a conventional processor like a DSP and implementing that same algorithm in FPGA hardware.  In the old days, we characterized the difference as “10x the design effort for 10x the performance.”  However, that gap has closed significantly in recent years.  For one thing, getting the most performance out of a modern DSP processor requires coding skill and knowledge of the underlying hardware that far exceed “normal” software engineering skill.  As DSP processors have gotten more complex, getting the most performance out of them has become more complex as well.  FPGAs, on the other hand, have gotten continuously easier to use for DSP design.  With the big signal-processing market out there beckoning, the FPGA companies (as well as third-party tool suppliers) have worked hard to take the pain out of the hardware-intensive design process for DSP-on-FPGA implementations.  The latest step in that evolution could prove to be one of the biggest, as it could remove that last nagging “fixed-point only” footnote from the brag sheet for FPGA DSP performance.

Altera says their floating-point DSP flow is available now in their standard tools and current FPGA products.  Further enhancements will follow with future versions of the tool flow and with the just-hitting-the-market 28nm FPGAs.
