
Sampling Some FPGA IP

Samplify Compresses Data and Design Cycles

FPGAs are a series of pipes. They’re not something you just dump something on. They’re not a big truck. If you don’t understand that, those pipes can be filled, and if they are filled, when you put your data in, it gets in line and it’s going to be delayed by anyone that puts into that pipe enormous amounts of material, enormous amounts of material.

Apologies to US Senator Ted Stevens (R-Alaska)

OK, maybe that’s just plain mean, but many people use FPGAs as big pipes.  You have an enormous amount of material coming in from, say, a high-speed sensor, and you need to somehow manage the flow of that data into the rest of your system.  FPGAs are the undisputed connectivity masters in such situations.  Frequently, designers will plop an FPGA between the sensor and the rest of the system.  The FPGA may be doing some down-conversion or some high-speed parallel DSP processing near the point of origin.  It may also be taking advantage of high-bandwidth I/O to distribute the incoming data to multiple channels where it can be processed at a more leisurely and reasonable pace.

One of the most efficient things you can do with that data is to compress it as close to the source as possible.  Then, your “series of pipes” can be much smaller throughout the rest of your system, as you’ll be dealing with compressed data instead of raw data.  Wouldn’t it be nice if that compression could be done right there in your FPGA (since you’ll be using one anyway)?

Samplify Systems – a Silicon Valley startup – thinks it would be a great idea.  They have just introduced Samplify – an FPGA-based compression technology designed to sit at the incoming data end of your system and perform high-speed, real-time compression, relieving the rest of your system from the burden of handling all that high-bandwidth information.  Samplify is sold as a generator that creates FPGA-friendly synthesizable HDL blocks that you can add to your existing FPGA design.  These blocks are reasonably small – in the range of 1,800 slices on a Xilinx Virtex-4 and 3,000 logic elements on an Altera Stratix II.  They also operate at a relatively high frequency (over 200 MHz on both of those FPGA families), so the compression block isn’t likely to be the bottleneck of your design, and you should have plenty of space left over in larger FPGAs for all that other stuff you expect from a decent series of pipes.

Samplify holds a series of patents on its flexible algorithm generator.  Because applications have widely varied requirements, Samplify can do compression in a variety of modes, including lossless, lossy, fixed-rate, and fixed-quality.  The real-time compression ratios range from 2:1 to 8:1, depending on the options selected.  That level of compression can have a dramatic impact both on the bandwidth required to move data within your system and on the storage density of information being stored or buffered.  For wireless applications, this means that a great deal more information can be transmitted, yielding an effectively higher-bandwidth connection.
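To put some illustrative numbers on that impact, here is a quick back-of-the-envelope Python sketch.  The 500 MB/s input rate and 1 GB buffer are assumptions for the sake of the example, not figures from Samplify:

```python
# Illustrative only: how a compression ratio in the quoted 2:1 to 8:1 range
# translates into link bandwidth and effective buffer capacity. The input
# rate and buffer size below are assumptions, not Samplify figures.

raw_rate_mb_s = 500.0   # assumed raw sensor stream, MB/s
buffer_gb = 1.0         # assumed physical buffer size, GB

for ratio in (2, 4, 8):
    link_mb_s = raw_rate_mb_s / ratio
    effective_gb = buffer_gb * ratio
    print(f"{ratio}:1 -> link carries {link_mb_s:.0f} MB/s, "
          f"{buffer_gb:.0f} GB buffer holds {effective_gb:.0f} GB of raw samples")
```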

Because the algorithm is implemented in FPGA hardware, the conversion rate is very high – over 50 mega-samples per second.  The company claims that the algorithm achieves approximately 90% of the theoretical best compression, meaning that even if you burned a bunch more logic, you wouldn’t get a substantial increase in compression or a corresponding decrease in the bandwidth and data storage required.
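If you’re wondering what “theoretical best” means for your own data, a common way to ballpark it is to estimate the entropy of the decorrelated sample stream – the lower bound on bits per sample that any lossless coder must spend.  Here is a rough Python sketch of that estimate on synthetic data; it’s a generic bound calculation, not Samplify’s method:

```python
# Rough sketch, not Samplify's method: estimate the theoretical-best lossless
# ratio for a sample stream by computing the first-order entropy of its deltas.
# The signal below is synthetic; substitute your own captured samples.

import numpy as np

bits_per_sample = 16                       # assumed converter word width
n = 1_000_000
t = np.arange(n)
signal = (2000 * np.sin(2 * np.pi * 0.01 * t)
          + np.random.normal(0, 4, n)).astype(np.int16)

deltas = np.diff(signal.astype(np.int32))  # simple decorrelation step
_, counts = np.unique(deltas, return_counts=True)
p = counts / counts.sum()
entropy = -(p * np.log2(p)).sum()          # bits/sample any lossless coder needs

print(f"entropy ~= {entropy:.2f} bits/sample, "
      f"best lossless ratio ~= {bits_per_sample / entropy:.2f}:1")
```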

The algorithm works by analyzing and compressing dynamic range, capitalizing on signal redundancy, and leveraging the “effective number of bits” inefficiency built into most data converters.  The system has an “Adaptation Engine” and a “Compression Engine” that are controlled by the “Samplify Controller” (see Figure 1).
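Samplify’s actual algorithm is patented and not spelled out publicly, but the general idea of cashing in on unused dynamic range can be illustrated generically.  The following block-floating-point-style sketch – purely an illustration of the concept, not Samplify’s technique – groups samples into small blocks and charges each block only the bits its local peak magnitude requires:

```python
# Generic illustration only -- not Samplify's patented algorithm. It shows the
# flavor of dynamic-range compression: split the stream into small blocks and
# charge each block only the bits its own peak magnitude requires (a simple
# block-floating-point-style estimate).

import numpy as np

def block_bit_widths(samples: np.ndarray, block_len: int = 16) -> np.ndarray:
    """Bits needed per block to cover that block's local dynamic range."""
    blocks = samples.reshape(-1, block_len).astype(np.int32)
    peaks = np.abs(blocks).max(axis=1)
    # magnitude bits plus a sign bit; at least one bit for an all-zero block
    return np.where(peaks > 0, np.ceil(np.log2(peaks + 1)).astype(int) + 1, 1)

def estimated_ratio(samples: np.ndarray, raw_bits: int = 16,
                    block_len: int = 16, exp_field: int = 5) -> float:
    widths = block_bit_widths(samples, block_len)
    encoded = (exp_field + widths * block_len).sum()  # header + packed samples
    return samples.size * raw_bits / encoded

# a quiet 16-bit signal: most samples never use the converter's full range
data = np.random.normal(0, 50, 4096).astype(np.int16)
print(f"approximate lossless ratio on this synthetic signal: "
      f"{estimated_ratio(data):.2f}:1")
```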

Figure 1. (Image Courtesy of Samplify Systems)

To tune the parameters for your particular type of data, Samplify offers a signal analysis tool called “Samplify for Windows.”  Samplify for Windows is a Windows application that runs the Samplify algorithm on a Windows-based PC, allowing you to experiment with various parameter settings using your own sample data.  Once you’ve determined the settings that work best for your application, you can transfer those to the generator to create the FPGA-based version.
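That experiment loop – pick a lossy setting, check how much accuracy you gave up, note the compression ratio – is easy to picture.  The sketch below mimics it with a crude bit-dropping quantizer on synthetic data; both the quantizer and the workflow are hypothetical stand-ins, not Samplify for Windows’ actual algorithm or interface:

```python
# Hypothetical sketch of the trade-off study Samplify for Windows enables:
# apply a lossy setting to your own sample data, then compare compression
# ratio against reconstruction SNR. The bit-dropping quantizer below is a
# crude stand-in, not the tool's actual algorithm or interface.

import numpy as np

def quantize(samples: np.ndarray, drop_bits: int) -> np.ndarray:
    """Crude lossy stand-in: discard the bottom drop_bits of every sample."""
    return (samples.astype(np.int32) >> drop_bits) << drop_bits

def snr_db(original: np.ndarray, reconstructed: np.ndarray) -> float:
    err = original.astype(np.float64) - reconstructed
    return 10 * np.log10(np.mean(original.astype(np.float64) ** 2)
                         / max(np.mean(err ** 2), 1e-12))

# synthetic stand-in for captured sensor data
t = np.arange(65536)
data = (8000 * np.sin(2 * np.pi * 0.003 * t)
        + np.random.normal(0, 3, t.size)).astype(np.int16)

for drop in range(6):
    ratio = 16 / (16 - drop)   # naive ratio for a 16-bit stream
    print(f"drop {drop} bits -> ~{ratio:.2f}:1, "
          f"SNR {snr_db(data, quantize(data, drop)):.1f} dB")
```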

Figure 2. (Image Courtesy of Samplify Systems)

“We often start with customers thinking that they require lossless compression,” says Bryan Hoyer, VP of Business Development for Samplify Systems.  “However, once they’re up and running, we convince them to try lossy compression and compare the actual results.  Often, they find that they can take advantage of significantly higher compression ratios without compromising the accuracy of their overall system.”

In a recent customer application sampling ultrasound data – 8-bit resolution at 500 mega-samples per second – the lossless compression algorithm was able to achieve a 2.87:1 compression ratio, reducing a 500 MB/s data stream to 140 MB/s.  With fixed-rate compression, Samplify was able to achieve 6:1 compression with the same measurement results, yielding a reduction from 500 MB/s to 85 MB/s.

The company claims that the algorithm will work at very high data rates – up to 40 Gsps.  While you won’t likely be using it for data for which well-established compression standards already exist (like MPEG, for example), it should prove a very attractive option for reducing bandwidth requirements, system cost, and design cycle time across a wide range of target applications where large amounts of incoming data must be processed or stored in real time – and for designers who don’t want to spend a lot of time becoming compression experts.

Samplify is licensed on a fixed-cost-plus-royalties model.  The Windows-based “Samplify for Windows” application can be purchased separately and can also be checked out as part of a 30-day evaluation.

Samplify is a prime example of the type of high-value IP product we may see flourishing as the use of FPGAs expands into new and broader markets and as the internal development resources of the large FPGA vendors can’t keep pace with the expanding wavefront of new blocks required by all those new applications.  Over time, we expect to see more small companies that make a good living providing proprietary IP blocks focused specifically on FPGA applications.
