
Dialing-in DSP on FPGA

Catapult Customized for Altera

We’ve discussed the amazing potential FPGAs bring to DSP acceleration for years now. We’re not alone, either. FPGA vendors have pumped out trumped-up performance specifications with dizzying claims as to the number of GMACs (Giga-Multiply-Accumulates per second) their hardware could execute. So dizzying, in fact, that most potential customers got vertigo and fell to the floor without buying any FPGAs.

This was a problem for FPGA vendors – who quickly hooked up probes to the unconscious DSP dudes, downloaded their issues through virtual JTAG ports, and found out (among other things) that whipping out a few lines of algorithmic software for a DSP processor was a whole different ballgame from going back to school to learn enough about datapath microarchitectures to design one of the highly-parallel, heavily-pipelined, carefully-timed creations in VHDL or Verilog that would actually bring any reasonable percentage of those GMACs to life.

If you watch the whole thing in slow motion (using our high-frame-rate HD digital camera with both Stratix III AND Virtex 5 devices processing the video in real time using all of their embedded DSP blocks simultaneously… oh wait, that’s the marketing pitch), you’d see that the FPGA vendors got those outrageous GMACs numbers simply by multiplying the number of multipliers on their device by the maximum frequency at which they could be clocked. Nothing in the real world will ever, ever, ever even come close to that performance with those devices.
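In case you want to check the math at home, here’s the entire calculation behind those datasheet numbers (a minimal sketch; the device figures are invented for illustration, not taken from any real datasheet):

```c
#include <stdio.h>

/* The entire datasheet calculation: every multiplier doing one
 * multiply-accumulate on every clock, forever. The device numbers
 * below are made up for illustration, not from any datasheet. */
int main(void) {
    const int    multipliers = 500;   /* hard multipliers on chip (assumed) */
    const double fmax_mhz    = 500.0; /* max DSP-block clock (assumed)      */

    /* count * MHz = mega-MACs per second; divide by 1000 for giga-MACs */
    double peak_gmacs = multipliers * fmax_mhz / 1000.0;

    printf("Peak: %.0f GMACs\n", peak_gmacs); /* prints 250 GMACs */
    return 0;
}
```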

This small marketing miscue, however, isn’t really the problem. It turns out that many DSP designers would be perfectly content with only 10-50X the performance they got from a traditional DSP (not the 1000X or so that some of the GMACs numbers might lead one to believe). The real issue was the designer expertise required to do the FPGA design and the fear factor project teams faced in picking up that gauntlet – even in hopes of enormous performance gains.

Over in the ASIC world, meanwhile, the EDA industry had been busy working on the same problem. Instead of licensing a DSP core for your next system-on-chip, you could get much better performance (usually at lower cost and power) by designing a chunk of custom hardware for your specific algorithm. Even in the lofty world of SoC ASIC design, nobody likes hand-crafting complex software algorithms into even more complex parallel hardware architectures, so some impressive tools were developed that could analyze an algorithm specified in a sequential, procedural language (like C, for example) and create a highly-optimized, parallelized, pipelined, ready-to-synthesize RTL microarchitecture that would put that algorithm into practice.
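If you’re wondering what “an algorithm specified in a sequential, procedural language” looks like going into such a tool, it’s nothing exotic. Here’s the “hello, world” of DSP, a FIR filter, in plain C (a simplified sketch; real Catapult input would typically use bit-accurate fixed-point data types):

```c
#define NUM_TAPS 16

/* A plain, sequential FIR filter: just the math, with no parallelism,
 * pipelining, or timing in sight. Simplified sketch; production code
 * would use bit-accurate fixed-point types rather than int. */
int fir(const int coeff[NUM_TAPS], const int sample[NUM_TAPS]) {
    int acc = 0;
    for (int i = 0; i < NUM_TAPS; i++) {
        acc += coeff[i] * sample[i]; /* one multiply-accumulate per tap */
    }
    return acc;
}
```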

The head of the class in those tools is Mentor Graphics’ Catapult C Synthesis. High-end ASIC design teams snapped up Catapult in droves, despite its hefty price tag, because they gained enormous productivity benefits from taking algorithms directly from C or C++ to hardware with performance that matched or even bettered what they got from months of hand-crafting RTL architectures. In the ASIC world, however, projects are large, long, and expensive. Saving a few engineer-months of converting algorithm to architecture was still peanuts compared with the massive cost and schedule impact of creating and verifying a huge, costly, small-geometry SoC ASIC. ASIC design teams prized Catapult as much for the flexibility it offered as for the productivity gains. Need a smaller-area solution with a longer latency? Just press a few buttons. Suddenly found out that you need to crunch data twice as fast? Press a few more and your architecture is completely re-jiggered. Need a different interface at the borders? Catapult can handle it for you – retiming all the register-to-register logic as it goes along.
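To picture what those buttons actually change, take the FIR sketch above. Conceptually, quadrupling throughput amounts to the restructuring shown below, except that with Catapult you never edit the source at all; you change constraints, and the tool performs the equivalent transformation in the generated RTL (this hand-unrolled version is illustrative only):

```c
/* Illustrative only: the rolled-vs-unrolled tradeoff that Catapult
 * automates from constraints, shown here by hand. Four multipliers
 * now work in parallel, for roughly 4x the throughput at roughly 4x
 * the multiplier area of the rolled loop above. */
int fir_unrolled_by_4(const int coeff[NUM_TAPS], const int sample[NUM_TAPS]) {
    int acc0 = 0, acc1 = 0, acc2 = 0, acc3 = 0;
    for (int i = 0; i < NUM_TAPS; i += 4) {
        acc0 += coeff[i]     * sample[i];
        acc1 += coeff[i + 1] * sample[i + 1];
        acc2 += coeff[i + 2] * sample[i + 2];
        acc3 += coeff[i + 3] * sample[i + 3];
    }
    /* a small adder tree folds the partial sums back together */
    return (acc0 + acc1) + (acc2 + acc3);
}
```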

In the FPGA space, the benefits of a tool like Catapult C align perfectly with the value proposition of the FPGA itself – getting a complex design to market in absolute minimal time while retaining the flexibility inherent in the programmable logic platform. Unfortunately, all the work that makes Catapult deliver spectacular results in ASIC land falls apart when the tool starts stitching multipliers together out of LUTs while optimized hard-core Multiply Accumulate (MAC) units sit idle on the same chip.

This week, Altera and Mentor Graphics announced a collaboration that brings the benefits of high-powered algorithm-to-architecture technology like Catapult C to the FPGA community – in a way that takes advantage of the unique capabilities of both technologies. The two companies have worked together to produce optimized Catapult libraries specifically for Altera’s FPGAs that allow the tool to understand and infer the high-performance resources already built into the FPGA. Without these libraries, some of the most powerful design tools in the world (Catapult) and some of the most powerful compute-acceleration hardware in existence (Altera’s Stratix III FPGAs) just didn’t play nicely together. You’d get results, but those results would fall far short of what was possible.

The two companies claim an average of 50-80% “DSP Fmax performance improvement” with the advent of the new libraries.  We’ll take exception to the use of Fmax as a DSP throughput metric, but the sentiment (and the hardware behind it) is what counts here.  Clearly, if you have a DSP algorithm that takes 50 multipliers, and you move those multipliers from LUT fabric to hard-wired, optimized multiply-accumulate devices sitting right on the chip, you gain significant performance, save a bundle of power, and free up all those expensive LUTs for other tasks.  For people targeting Altera devices with DSP algorithms, this is an enormous step forward in performance, cost, and power.
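A quick back-of-the-envelope sketch of what that move is worth (the LUTs-per-multiplier figure below is a loose assumption; real counts vary widely with word width and device family):

```c
#include <stdio.h>

/* Rough scale of the savings from inferring hard DSP blocks instead of
 * building multipliers from LUTs. Both figures are loose assumptions. */
int main(void) {
    const int multipliers   = 50;  /* multipliers the algorithm needs       */
    const int luts_per_mult = 400; /* ballpark LUTs for one soft multiplier */

    printf("LUTs freed: ~%d\n", multipliers * luts_per_mult); /* ~20,000 */
    return 0;
}
```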

Using the Catapult C tool will not turn a software engineer into a hardware expert. Catapult is not a fully-automatic algorithm-to-hardware converter. It is, instead, a power tool that helps someone with some hardware design knowledge very quickly find a hardware implementation of an algorithm that strikes whatever tradeoff between area, power, and performance the project demands. It also affords incredible flexibility in changing that architecture almost on the fly as design demands shift during (or after) the project.

If you’re looking for a tool that’s typical of the “near-free” FPGA norm, be prepared for some serious sticker shock.  Catapult C is as expensive as a ground-breaking productivity tool should be.  Mentor claims that Catapult gives a 4-20X productivity advantage over hand-coding RTL for complex algorithms.  Unless your engineering time is very inexpensive, that kind of productivity boost can pay for even a very expensive tool on the first project.

The benefits of high-level design abstraction go far beyond productivity, however. Catapult’s ability to fine-tune the hardware architecture to exactly balance area, performance, and power, combined with the FPGA’s time-to-market and flexibility advantages, makes something that, if you squint your eyes a little… starts to approach the implementation flexibility of traditional DSP processors with orders of magnitude better performance and power consumption.

