FPGAs end up in the thick of the datapath in most applications today. Whether the device is bridging between incompatible protocols, blasting bits over a backplane, or performing massively parallel signal processing – in each case, a massive data stream comes into our FPGA, gets munged around by all our wonderful FPGA fabric and internal blocks, and then the result is streamed out on its way to some final destination. While much of our design focus is on what goes on “between the I/Os,” the limiting factor is often the bandwidth of those I/O channels themselves. Witness the huge investment FPGA companies have made in high-speed serial transceivers, and the premium prices they charge for FPGAs with high I/O bandwidth.
In most cases, the data coming into our FPGA has some analog origin. Somewhere, an analog-to-digital converter (ADC) took those analog signals and converted them to digital data streams. The fidelity of our project is often dependent on how much accuracy can be preserved while keeping the data stream size manageable.
Samplify Systems has attacked this problem from several angles. First, they have a proprietary compression algorithm with variable compression ratios and options for lossless or lossy compression. Unlike application-specific compression algorithms such as MPEG, Samplify’s compression is generic, and it is low-latency when implemented in hardware. Compression IP is interesting, but Samplify took their program another step. They designed a chip with high-performance ADCs coupled directly to a hardware implementation of their compression algorithm. Hook your analog data up to the front end, dial in your compression ratio, and get compressed digital data out the back ready to go straight… where?
To your FPGA, that’s where. Now you can get by with much lower FPGA I/O bandwidth (like LVDS on a low-cost FPGA instead of SerDes on a high-end one perhaps?) as long as you can decompress the data on your FPGA so that you can do your processing. Here’s where the next phase of the Samplify system kicks in – you can license pre-tested IP blocks for your FPGA that allow you to decompress that incoming data so you can spread it out across your FPGA fabric and go about your normal business.
But – what if your business then takes you elsewhere, like off of your FPGA to an embedded processor for lower-speed processing? In many cases, it would be nice to compress that data as well. Samplify also licenses compression IP for FPGAs, along with software implementations for Windows, MATLAB, and GPUs. This end-to-end approach to the data path allows you to get away with the least amount of the most expensive hardware – the FPGA I/Os and the traces and connectors that hook those to the rest of your system.
This is a great plan, but we need more specifics. Samplify’s SAM1600 family comprises 12-bit ADCs with up to 16 channels, sample rates up to 65 MSPS, and integrated data compression. The company claims power consumption in a full 16-channel, 12-bit application comes out to 44 mW per channel. The device can be used as a stand-alone ADC without compression, but what fun is that? By cranking the compression up to 4:1, we could reduce the 16 LVDS pairs on the output to just 4, with a corresponding 75% reduction in power consumed by the output drivers. Moving on to the FPGA, we can reduce the number of inputs by the same factor – saving power and pins on the FPGA as well.
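A quick back-of-the-envelope check shows where those pair counts come from. The figures below assume the 800 Mbps-per-pair serialized LVDS rate quoted for the part and a 4:1 compression setting (within the 2:1 to 8:1 range the IP supports):

```python
import math

# Back-of-the-envelope check of the LVDS pair savings described above.
# Assumed figures: 16 channels of 12-bit samples at 65 MSPS, 800 Mbps
# per serialized LVDS pair, and a 4:1 compression setting.
channels, bits, fs = 16, 12, 65e6
pair_rate = 800e6
ratio = 4

raw = channels * bits * fs              # 12.48 Gbps uncompressed
compressed = raw / ratio                # 3.12 Gbps at 4:1

print(f"raw: {raw/1e9:.2f} Gbps -> {math.ceil(raw/pair_rate)} pairs")
print(f"4:1: {compressed/1e9:.2f} Gbps -> {math.ceil(compressed/pair_rate)} pairs")
# raw: 12.48 Gbps -> 16 pairs
# 4:1: 3.12 Gbps -> 4 pairs
```

The 75% driver-power figure follows directly: 4 pairs toggling instead of 16.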
The 1600 family comes in a 12 mm × 12 mm, 196-pin BGA and is available in three flavors, each with a different combination of channel count (8 or 16), sample rate (45 or 65 MSPS), and the presence or absence of the proprietary “Prism” compression. All of the devices also employ a port-compression technology that sustains full-rate serialized LVDS output at 800 Mbps per pair, even when Prism compression is not in use.
Samplify has built an aggressive patent portfolio around this device, including claims covering the combination of ADC and compression technologies. Putting one of these in front of your FPGA could easily pay for itself in cost savings on the FPGA alone, and the overall reduction in system power and in board and connector complexity translates into further savings. The proprietary nature of the compression algorithm might give pause, but for now Samplify is making a robust portfolio of compression and decompression IP available, so you shouldn’t be stuck with a compressed link in your system that you can’t match with corresponding decompression later on.
The FPGA IP handles compression and decompression in real time with compression ratios adjustable from 2:1 to 8:1. There are three modes of operation – lossless, with bit-true reconstruction of the original signal; a constant-output-bit-rate mode for fixed-capacity links; and a mode that gives direct control over compression settings to optimize the SNR for your particular application. The company delivers the IP as encrypted netlists for Xilinx and Altera FPGAs, under an annual-fee-plus-royalty license arrangement.
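Samplify hasn’t published the internals of Prism, so any code here is purely illustrative. The sketch below shows the general flavor of lossless sample compression – delta-encode each block, then store the deltas at the minimum bit width that block needs – exploiting the fact that oversampled signals change slowly from sample to sample. The function names and block structure are assumptions, not Samplify’s API:

```python
import math

def compress_block(samples):
    """Delta-encode a block of integer samples.

    Returns (first_sample, bit_width, deltas): enough to reconstruct the
    block exactly (bit-true, i.e. lossless). In a real bitstream the
    deltas would be packed at `bit_width` bits each instead of 12.
    """
    deltas = [b - a for a, b in zip(samples, samples[1:])]
    # Width needed for the largest signed delta in this block.
    width = max((d.bit_length() + 1 for d in deltas), default=1)
    return samples[0], width, deltas

def decompress_block(first, width, deltas):
    """Bit-true reconstruction of the original samples."""
    out = [first]
    for d in deltas:
        out.append(out[-1] + d)
    return out

# A slowly varying 12-bit signal: consecutive deltas need far fewer
# bits than the full 12-bit word, which is where the savings come from.
signal = [int(2048 + 1500 * math.sin(2 * math.pi * i / 64)) for i in range(64)]
first, width, deltas = compress_block(signal)
assert decompress_block(first, width, deltas) == signal  # lossless round-trip
```

A fixed-output-rate mode, by contrast, would have to discard low-order bits whenever a block’s deltas exceed the budgeted width – which is where the lossy/SNR trade-off in the third mode comes in.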
The Windows tool allows you to analyze and experiment with the compression modes on real data. This would facilitate what-if experiments before you’ve selected a specific FPGA platform, and in a friendlier environment than embedded hardware. There is also a version for Windows Embedded that allows the compression/decompression to be continued into embedded software.
The company says it is targeting applications like ultrasound front-ends, 4G base stations, test equipment, radar, and sonar. In many of these areas – particularly medical imaging – the connection from the analog front end to the digital processing core encompasses some of the most expensive components in the system. If a cheap chip can bring down power, cost, and complexity, design-ins should be plentiful.