
Samplify Finds a Sweet Spot

ADC and Compression Complement FPGAs

They always want more data.  

They want more data, faster, for less power, and at a lower cost.  

Their appetite seems insatiable.

…which is pretty lucky for us, as electronics engineers.  Otherwise, we’d have worked ourselves out of jobs years ago.  We thrive on the continual demand for bigger, faster, cheaper, cooler pipes.  The main weapon in our tool chest?  Moore’s Law – an exponential power tool booster rocket that allows us to constantly do more to more with less for less.  Got a problem nobody can solve?  Put it in to bake for a couple of years with Moore’s Law, and you’re likely to come up with a solution.  

Recently, however, our favorite power tool has been getting a little long in the tooth.  When we want to pump more data from point A to point B, we need bigger, faster transceivers.  Those transceivers have scary, hairy analog parts that don’t walk the garden path of Moore’s Law quite so easily.  

FPGA vendors have gone to bat for us – bringing us faster, cooler, larger devices with each new process node, and with faster transceivers to boot.  However, this can’t go on indefinitely.

A few years ago, Samplify introduced some IP that could help us gain ground on the problem.  They came to market with compression/decompression IP that could dramatically reduce the amount of data we were pushing through our pipes.  Their algorithms let us choose between lossless and lossy operation, with the lossy modes trading a controlled amount of accuracy for higher compression ratios.  The algorithms weren’t domain-specific, so we could compress whatever kind of data we were dealing with, without having to do a bunch of re-engineering of our problem.
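To make the lossless-versus-lossy tradeoff concrete, here is a toy sketch in Python – emphatically not Samplify’s algorithm, and the test signal, bit widths, and entropy coder are all illustrative assumptions.  The lossless path delta-encodes the samples and hands them to a general-purpose entropy coder; the lossy path drops low-order bits first, buying a better compression ratio at the cost of quantization error:

import zlib
import numpy as np

def compress_block(samples, drop_bits=0):
    # Toy compressor for 16-bit sample data (illustrative only).
    # drop_bits=0  -> lossless: delta-encode, then entropy-code.
    # drop_bits>0  -> lossy: discard low-order bits first (quantization),
    #                 improving the ratio but adding error.
    quantized = (samples >> drop_bits).astype(np.int16)
    deltas = np.diff(quantized, prepend=quantized[:1])  # exploit sample-to-sample correlation
    return zlib.compress(deltas.tobytes(), 9)

# Simulated band-limited signal plus noise, standing in for real converter data.
rng = np.random.default_rng(0)
t = np.arange(4096)
samples = (2000 * np.sin(2 * np.pi * t / 64) + rng.normal(0, 20, t.size)).astype(np.int16)

raw_bytes = samples.tobytes()
for drop in (0, 2, 4):
    payload = compress_block(samples, drop_bits=drop)
    print(f"drop_bits={drop}: ratio = {len(raw_bytes) / len(payload):.2f}:1")

The more bits you are willing to throw away, the better the ratio – which is the same dial, in principle, that a domain-tuned compressor turns far more intelligently.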

Now, the company has transformed itself into a fabless semiconductor supplier.  Instead of just selling us IP for our designs, they took their compression IP, optimized it for certain applications, mated it with high-performance analog-to-digital conversion, and designed it into some cheap, convenient silicon that we can park right next to our FPGAs – giving us a big break on the amount of data our FPGA design has to consume.  For the company, that means a business model that is a lot easier to manage than the quirky, unpredictable, bizarre world of IP licensing.  For us, it means that a computationally intensive, power-hungry, LUT-eating piece of our design can be moved off-chip.  With the compression handled there, we can do the heavy lifting that would have required a high-end FPGA and get away with a low-cost FPGA instead.

As an example, Samplify just announced a development kit for ultrasound.  It uses the Samplify SAM1600 ADC along with Altera Cyclone FPGAs to deliver a reference design for a 64-channel ultrasound analog front end.  Both the Samplify device and the Altera device are low-power, allowing the resulting design to be used in sophisticated handheld and portable ultrasound products, or in full-blown consoles.  The kit comes with a 64-channel reference design offering both continuous-wave and pulse-Doppler modes.  You just need to add your secret sauce and go.
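A bit of back-of-the-envelope arithmetic shows why compression right at the converter matters here.  The per-channel sample rate, resolution, and 2:1 compression ratio below are illustrative assumptions rather than published SAM1600 numbers, but the order of magnitude is the point:

# Rough data-rate estimate for a 64-channel ultrasound front end.
# Sample rate, resolution, and compression ratio are illustrative assumptions.
channels = 64
sample_rate_hz = 40e6       # assume 40 MSPS per channel
bits_per_sample = 12        # assume 12-bit samples

raw_gbps = channels * sample_rate_hz * bits_per_sample / 1e9
print(f"Raw aggregate data rate: {raw_gbps:.1f} Gb/s")      # roughly 30.7 Gb/s
print(f"After 2:1 compression:   {raw_gbps / 2:.1f} Gb/s")

Cutting tens of gigabits per second in half before the data ever hits the FPGA is a big part of what lets a low-cost device handle the job.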

With the Cyclone FPGAs handling the receive and transmit beamforming and the Samplify device handling ADC and compression, the whole signal path is populated with low-cost, low-power devices while still providing a huge amount of customization capability.  This kind of marriage of high-performance ASSPs with FPGAs will likely show up more often as FPGA vendors work to get more near-turnkey design kits on the market – expanding their horizons beyond the usual FPGA-savvy design community.

Heading into even more FPGA-rich territory, Samplify has developed what it calls Prism IQ compression technology to deploy in the 4G wireless space.  Prism IQ delivers 2:1 compression on Common Public Radio Interface (CPRI) links, reducing the fiber bandwidth required to connect to remote radio sites.  That reduction in bandwidth maps directly to significant infrastructure-cost savings for operators.  By creating this application-specific version of their compression, Samplify is able to deliver up to a 4 dB improvement in error-vector magnitude (EVM) over their generic algorithm.  In addition to improving FPGA-based solutions, some of the Samplify offerings will take the FPGA out entirely.  The company has partnered with IDT to create signal-chain systems that get the large volume of data from the top of a radio tower to the baseband processor at the bottom.
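To put the 2:1 figure in context, here is a rough sketch against the standard CPRI line-rate options.  The choice of a fully loaded option-7 link and the clean halving are illustrative assumptions – real CPRI links also carry control words and framing overhead on top of the IQ payload:

# Standard CPRI line-rate options (Mb/s), options 1 through 7.
cpri_rates_mbps = [614.4, 1228.8, 2457.6, 3072.0, 4915.2, 6144.0, 9830.4]

uncompressed_mbps = 9830.4                 # assumed: a fully loaded option-7 link
compressed_mbps = uncompressed_mbps / 2    # 2:1 IQ compression

# Smallest standard rate that can still carry the compressed stream.
fits_in = min(r for r in cpri_rates_mbps if r >= compressed_mbps)
print(f"IQ payload after 2:1 compression: {compressed_mbps:.1f} Mb/s")
print(f"Fits a standard {fits_in:.1f} Mb/s link instead of {uncompressed_mbps:.1f} Mb/s")

Dropping a link by a full line-rate step – or carrying twice the antenna-carriers on the fiber you already have – is where the operator’s cost savings come from.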

Cases like Samplify highlight three important trends in the FPGA space.  First, it is very difficult to make a sustainable business solely by licensing IP for FPGA use.  The market, the mindset of design teams, the proliferation of near-free IP, and the ubiquitous NIH (not-invented-here) syndrome make the IP business very difficult.  Somehow, embodying your IP into a chunk of silicon that you can sell makes the problems all mysteriously vanish.  We hope there will be a day when FPGA IP is a booming business, and when designers can grab a key IP block for their designs as easily as buying an app for their smartphones.  

The second trend is pre-packaged solutions where the FPGA is a key part of the platform, but other custom-designed silicon is required to get to the finish line.  For years, FPGA vendors have struggled against each other for precious points of market share in key application areas that were already FPGA-savvy.  At the same time, they’ve worked to open up new markets for FPGA technology but have been inhibited by the daunting learning curve facing new adopters.  The solution is to package FPGAs into comprehensive, domain-specific development kits – along with other IP and silicon – that get development teams to the point of almost-ready-to-use designs with the FPGA already designed in.  From that point, embracing the FPGA is a much more palatable proposition.  

The third trend we can see here is that more than Moore’s Law will be required to keep up with application demands.  As progress in process technology flattens out, we’ll see more cases where algorithmic and software solutions are required to reach bandwidth, cost, and power goals – rather than just waiting for the next process node to come along and make everything better.  There will be more opportunity and more motivation for creative engineering work at (and near) the application level.  In the past, we could either work hard to come up with a novel approach to solving our problem, or wait 18 months until the next process node made novel solutions unnecessary.  

When the free lunch goes away, we’ll have to start cooking for ourselves. 

Maybe it will taste better.  

