FPGAs thrive on the desire for bandwidth. Every time another household upgrades its flat-screen TV to 1080p, the FPGA industry says “Thank You.” Every time the cable and satellite companies need to upgrade their service infrastructure to support a new standard, the FPGA world says “Muchas Gracias.” When a movie studio releases a blockbuster title created entirely in digital 3D, FPGA companies say “Domo Arigato Gozaimasu.”
Why do FPGAs and bandwidth go together?
First, FPGAs are excellent deliverers of bandwidth. We engineers tend to create more bandwidth in three ways: 1) we run things at higher frequencies, 2) we run things in parallel, and 3) we compress.
FPGAs are not too bad at #1. They could be better, actually, but for the last several generations, FPGA companies have chosen to keep frequencies more or less the same while putting their effort into reducing power consumption. This is because power turns out to be a bigger limitation on throughput than frequency. You can make up for frequency limits by stacking things in parallel, but when those things start to consume too much power and generate too much heat, you’re out of luck.
FPGAs are great at #2 – running in parallel. In fact, setting up parallel datapaths is the main reason to use FPGAs for accelerated computing. Replacing software algorithms with hardware datapaths and then implementing many in parallel not only generates much more data throughput than software; it also does it with much less total power. Lower power means you can pack more on a board, in a rack, and in a box.
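The tradeoff described above is easy to see with some back-of-the-envelope arithmetic. The sketch below (all clock rates and lane counts are hypothetical, chosen only for illustration) shows how many modest-clock parallel datapaths can out-deliver one fast sequential one:

```python
# Back-of-the-envelope throughput comparison (all numbers hypothetical).

def throughput(lanes, clock_hz, samples_per_cycle=1):
    """Aggregate samples per second for 'lanes' parallel datapaths."""
    return lanes * clock_hz * samples_per_cycle

# A hypothetical processor core: one datapath at a high clock rate.
cpu = throughput(lanes=1, clock_hz=3_000_000_000)

# A hypothetical FPGA design: 32 parallel datapaths at a modest clock.
fpga = throughput(lanes=32, clock_hz=200_000_000)

# The parallel lanes more than make up for the 15x slower clock.
print(fpga / cpu)  # → 2.1333...
```

The same arithmetic runs in reverse for power: if each slow lane burns far less energy per sample than the fast sequential core, the parallel version wins on throughput per watt as well, which is what lets you pack more on a board, in a rack, and in a box.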
FPGAs can also be excellent at #3 – compression. Depending on the compression algorithm, low-computation parts of the process can be done with embedded processors in the FPGA, and computationally-intensive, parallelizable parts can be implemented with optimized hardware datapaths.
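The partitioning idea above can be sketched in a few lines. This is a minimal illustration, not any particular vendor's flow: run-length encoding stands in for a real compression kernel, and all function names are invented for the example. The compute-heavy kernel is the part you would map to parallel hardware datapaths; the lightweight framing logic is the part suited to an embedded processor:

```python
# Sketch of control/datapath partitioning for a compression pipeline.
# Run-length encoding is a stand-in for a real compression algorithm;
# all names here are illustrative, not a real FPGA API.

def rle_kernel(data):
    """Compute-heavy inner loop: the candidate for hardware datapaths."""
    out = []
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i]:
            run += 1
        out.append((data[i], run))  # (byte value, run length)
        i += run
    return out

def compress_frame(frame_id, data):
    """Low-computation framing: the part an embedded processor handles."""
    header = ("FRAME", frame_id, len(data))
    return header, rle_kernel(data)

header, payload = compress_frame(0, b"aaabbbbcc")
print(payload)  # → [(97, 3), (98, 4), (99, 2)]
```

In an actual FPGA implementation the kernel would process many pixels or samples per clock in parallel hardware, while the embedded processor only touches headers and bookkeeping, which is exactly the division of labor the paragraph describes.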
A fourth advantage FPGAs have for delivering bandwidth is more subtle. Just about every time we make a large increase in bandwidth, it is accompanied by a new standard of some sort. When we have new standards, we generally need to have special hardware to implement the standard – rather than just more optimized versions of the old hardware. Usually, at the beginning, a new standard is in flux – with preliminary implementations that later give way to more robust versions as the standard becomes stable, gets ratified, and goes mainstream.
The time when the standard is in flux is the FPGA’s heyday. ASSP companies can’t afford to create their designs until a standard is finalized, so the only viable option for the in-flux standard is usually FPGA implementation. If FPGA versions are cheap and effective enough, they may eliminate the need for ASSP implementations altogether.
At IBC 2010 last week, Xilinx made a number of announcements aimed at positioning themselves to win FPGA sockets in the bandwidth glut accompanying the current shifting standards in the broadcast industry. Events like the FIFA World Cup are lynchpins for industry-wide infrastructure upgrades. With the move from 720p HD video to 1080p, and from 1080p to 3D, broadcasters need a complete re-work of their infrastructure – with higher bandwidth and throughput at every juncture. Cameras, switchers, routers, encoders, monitors, projectors – all the way through the capture and delivery chain, increased performance requirements and evolving standards create a natural demand for FPGAs.
The problem is, not all designers of these systems are experts in FPGA design. Coming up to speed on FPGAs – enough to accelerate a sophisticated encoder algorithm in hardware, or to move vast amounts of compressed data through connections based on standards that are still in flux – is a daunting challenge in itself. For FPGA companies, this lack of designer expertise in FPGAs is one of the biggest barriers to winning sockets for their devices.
Xilinx has decided that one way to solve this problem is to create domain-specific development kits that do a lot of the heavy lifting in the early part of the design process. By starting with a development board that has all the appropriate connections and interfaces for the target market – in this case, broadcast infrastructure – the average designer starts off in a much better place. Next, by adding in domain-appropriate IP supporting the common standards and non-value-added design components, the development kit allows the designer to focus only on their particular area of expertise. Finally, by providing reference designs that are 50%-80% of the final system that the design team is working on, the FPGA company jump-starts the project and reduces the time-to-volume (when they can start making real money).
At IBC, Xilinx announced exactly that – a new development kit, called the Xilinx Spartan-6 FPGA Broadcast Connectivity Kit, and a Broadcast Processing Engine IP Core. The kit is aimed at the implementation of interfaces like triple-rate Serial Digital Interface (SDI), as well as applications that require real-time video processing. The kit, designed in cooperation with Tokyo Electron Devices, provides triple-rate SDI, High-Definition Multimedia Interface (HDMI), DisplayPort, DVI, and V-by-One. Easy availability and implementation of all these standards in an FPGA platform makes bridging from anything-to-anything a breeze. With the advent of 3D, the doubling of bandwidth requirements and the accompanying increase in the required number of SDI ports also plays into the wheelhouse of FPGAs.
The Broadcast Processing Engine is billed as an IP core, but it works more like a reference design. Designers can use the core to process video on a single FPGA, taking advantage of a standardized input and output (Xilinx Streaming Video Interface, or XSVI) to add their own IP or mix-and-match with Xilinx IP. The Broadcast Processing Engine supports standards including 3D TV and 4K x 2K digital cinema. It will support up to 4K x 4K resolution in the video scaler at 12-bit color depth, which is enough for even the most demanding broadcast and cinematic systems.
At the same time, Xilinx announced with Coreworks the availability of a range of audio codec IP cores for compressing multi-channel audio. By handling the codecs with FPGAs, system designers can leave their systems open for future expansion and evolution in standards. FPGAs are also ideal for the massive DSP processing that many of these standards require, delivering more performance for less power than conventional DSP processors. The new IP cores, designed by Coreworks, support Dolby Digital, AAC+, MPEG-1 Layer II, and Dolby-E.
A third Xilinx announcement at the conference was with partner VSofts, and it was a demonstration of an H.264/AVC-I IP core implemented on Xilinx FPGAs. Rapid encoding – particularly of lynchpin standards like H.264 – is a key element in tracking increasing bandwidth requirements like those presented by 3D TV.
More and more, we expect to see FPGA companies following this trend and bringing the solution to the customer rather than waiting for the customer to find and create their own solution based on FPGAs. By speaking the language of the design domain, providing development kits with the right inputs and outputs, offering a range of industry-specific IP blocks in conjunction with partners, and providing a large head-start in the form of reference designs, FPGA companies can have a huge impact on both the adoption rate of FPGAs and the time-to-volume for those socket wins to start paying off.