
Samplify Finds a Sweet Spot

ADC and Compression Complement FPGAs

They always want more data.  

They want more data, faster, for less power, and at a lower cost.  

Their appetite seems insatiable.

…which is pretty lucky for us, as electronics engineers.  Otherwise, we'd have worked ourselves out of jobs years ago.  We thrive on the continual demand for bigger, faster, cheaper, cooler pipes.  The main weapon in our tool chest?  Moore's Law – an exponential power tool that lets us constantly do more to more with less for less.  Got a problem nobody can solve?  Let it bake with Moore's Law for a couple of years, and you're likely to come up with a solution.

Recently, however, our favorite power tool has been getting a little long in the tooth.  When we want to pump more data from point A to point B, we need bigger, faster transceivers.  Those transceivers have scary, hairy analog parts that don't walk the garden path of Moore's Law quite so easily.

FPGA vendors have gone to bat for us – bringing us faster, cooler, larger devices with each new process node, and with faster transceivers to boot.  However, this can’t go on indefinitely.

A few years ago, Samplify introduced IP that could help us gain ground on the problem: compression/decompression cores that dramatically reduce the amount of data we push through our pipes.  Their algorithms let us choose between lossless operation and lossy modes that trade a little fidelity for higher compression ratios.  The algorithms weren't domain-specific, so we could compress whatever kind of data we were dealing with, without a bunch of re-engineering of our problem.
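To make that lossless/lossy dial concrete, here is a minimal Python sketch of generic block-exponent (block floating point) encoding – one common way to compress sampled data.  It is strictly an illustration of the trade-off, not Samplify's proprietary algorithm, and the block contents, bit widths, and keep_bits parameter are assumptions chosen for readability.

import numpy as np

def compress_block(samples, keep_bits=None):
    # Encode one block of int16 samples as (shift, mantissas).  With
    # keep_bits=None, no bits are dropped (lossless).  A smaller keep_bits
    # discards low-order bits (lossy) for a better compression ratio.
    peak = int(np.max(np.abs(samples)))
    exponent = max(peak.bit_length(), 1)   # bits needed by the largest sample
    shift = exponent - keep_bits if keep_bits and keep_bits < exponent else 0
    return shift, samples >> shift         # arithmetic shift preserves sign

def decompress_block(shift, mantissas):
    return mantissas << shift              # exact when shift == 0

block = np.array([100, -512, 37, 250], dtype=np.int16)
shift0, m0 = compress_block(block)                # lossless
shift1, m1 = compress_block(block, keep_bits=6)   # lossy: drops 4 low bits here
assert np.array_equal(decompress_block(shift0, m0), block)

The actual savings come from packing each mantissa at its reduced width instead of a full 16 bits – bookkeeping the sketch omits for brevity.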

Now, the company has transformed itself into a fabless semiconductor supplier.  Instead of just selling us IP for our designs, Samplify took its compression IP, optimized it for certain applications, mated it with high-performance analog-to-digital conversion, and designed it into cheap, convenient silicon that we can park right next to our FPGAs – giving us a big break on the amount of data our FPGA design has to consume.  For the company, that means a business model that is a lot easier to manage than the quirky, unpredictable, bizarre world of IP licensing.  For us, it means that a computationally intensive, power-hungry, LUT-eating piece of our design can move off-chip – letting us get away with a low-cost FPGA where the heavy lifting would otherwise have demanded a high-end one.

As an example, Samplify just announced a development kit for ultrasound.  It pairs the Samplify SAM1600 ADC with Altera Cyclone FPGAs to deliver a reference design for a 64-channel ultrasound analog front end.  Both devices are low-power, allowing the resulting design to be used in sophisticated handheld and portable ultrasound products, or in full-blown consoles.  The 64-channel reference design offers both continuous-wave and pulse-Doppler modes.  You just need to add your secret sauce and go.

With the Cyclone FPGAs handling receive and transmit beamforming and the Samplify device handling conversion and compression, the whole signal path is populated with low-cost, low-power devices while retaining a huge amount of customization capability.  This kind of marriage of high-performance ASSPs with FPGAs will likely show up more often as FPGA vendors work to get more near-turnkey design kits on the market – expanding their horizons beyond the usual FPGA-savvy design community.
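For the curious, the receive half of that beamforming job boils down to the classic delay-and-sum operation: delay each channel so the returning wavefront lines up across the array, then add.  The Python sketch below uses whole-sample delays and made-up data sizes; a real design uses fractional, dynamically focused delays in fixed-point hardware, so treat this as a conceptual illustration rather than the reference design's implementation.

import numpy as np

def delay_and_sum(channels, delays):
    # channels: (num_channels, num_samples) array of digitized echo data
    # delays:   per-channel alignment delay, in whole samples
    num_ch, num_samples = channels.shape
    out = np.zeros(num_samples)
    for ch, d in zip(channels, delays):
        d = int(d)
        aligned = np.roll(ch, -d)    # advance this channel by d samples
        if d:
            aligned[-d:] = 0         # zero the samples that wrapped around
        out += aligned
    return out / num_ch              # focused (beamformed) output

rng = np.random.default_rng(0)
echoes = rng.standard_normal((64, 4096))   # stand-in for 64 digitized channels
delays = np.arange(64) // 4                # hypothetical steering profile
focused = delay_and_sum(echoes, delays)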

Heading into even more FPGA-rich territory, Samplify has developed what it calls Prism IQ compression technology for the 4G wireless space.  Prism IQ delivers 2:1 compression on the Common Public Radio Interface (CPRI), reducing the fiber bandwidth required to connect remote radio sites.  That bandwidth reduction maps directly to significant infrastructure savings for operators.  By creating this application-specific version of its compression, Samplify can deliver up to a 4 dB improvement in error-vector magnitude (EVM) over its generic algorithm.   In addition to improving FPGA-based solutions, some of the Samplify offerings will take the FPGA out entirely.  The company has partnered with IDT to create signal-chain systems that get the large volume of data from the top of a radio tower to the baseband processor at the bottom.
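Some rough, back-of-the-envelope arithmetic shows why operators care.  In the Python snippet below, the line rate is a standard CPRI option, but the per-antenna-carrier payload is a round, hypothetical number – not a figure from the CPRI spec or from Samplify.

line_rate = 4915.2e6   # bits/s: CPRI line-rate option 5
payload = 1.0e9        # bits/s of raw I/Q per antenna-carrier (assumed)

carriers_raw = int(line_rate // payload)          # 4 carriers fit uncompressed
carriers_2to1 = int(line_rate // (payload / 2))   # 9 carriers fit at 2:1

print(carriers_raw, carriers_2to1)

On the same fiber and optics, 2:1 compression more than doubles the antenna-carriers per link – or, equivalently, lets the same traffic ride a lower CPRI line rate, deferring an optics upgrade.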

Cases like Samplify highlight three important trends in the FPGA space.  First, it is very difficult to build a sustainable business solely by licensing IP for FPGA use.  The market, the mindset of design teams, the proliferation of near-free IP, and the ubiquitous NIH (not-invented-here) syndrome make the IP business very difficult.  Somehow, embodying your IP in a chunk of silicon that you can sell makes those problems mysteriously vanish.  We hope there will come a day when FPGA IP is a booming business, and when designers can grab a key IP block for their designs as easily as buying an app for their smartphones.

The second trend is pre-packaged solutions where the FPGA is a key part of the platform, but other custom-designed silicon is required to get to the finish line.  For years, FPGA vendors have struggled against each other for precious points of market share in application areas that were already FPGA-savvy.  At the same time, they've worked to open up new markets for FPGA technology and have been held back by the daunting learning curve facing new adopters.  The solution is to package FPGAs into comprehensive, domain-specific development kits – along with other IP and silicon – that bring development teams to the point of almost-ready-to-use designs with the FPGA already designed in.  From that point, embracing the FPGA is a much more palatable proposition.

The third trend we can see here is that more than Moore’s Law will be required to keep up with application demands.  As progress in process technology flattens out, we’ll see more cases where algorithmic and software solutions are required to reach bandwidth, cost, and power goals – rather than just waiting for the next process node to come along and make everything better.  There will be more opportunity and more motivation for creative engineering work at (and near) the application level.  In the past, we could either work hard to come up with a novel approach to solving our problem, or wait 18 months until the next process node made novel solutions unnecessary.  

When the free lunch goes away, we’ll have to start cooking for ourselves. 

Maybe it will taste better.  
