
Acceleration as a Service

Accelize Attacks Barriers

Let’s say you have a huge batch of data that needs to be crunched. Maybe it needs the special help of some new neural network algorithm running on a massive server cluster, accelerated with a pool of FPGA accelerators. We’ll call you “Lisa.” But, Lisa, you don’t have a giant server farm with FPGA-based accelerators. You also don’t have the specialized software required to crunch your data, let alone the super-specialized version that can take advantage of FPGA-based acceleration. You’re just a team of specialized experts and several million dollars’ worth of exotic hardware away from solving your problem. Poor Lisa.

Now let’s say you’re a provider of cloud-based computing services. We’ll call you “AWS” (but you could be any one of the growing number of companies leasing cloud computing capabilities). You could solve Lisa’s hardware problem. You’ve got acres of the latest servers, and shiny-new racks of FPGA-based accelerators with the latest, snazziest, fastest FPGAs, all lined up and liquid cooled – just waiting to eat Lisa’s data for breakfast, at a price far more realistic than what Lisa would pay for much less capable hardware of her own. Lisa and thousands like her could really help generate revenue on that giant server/FPGA investment you’ve made. Unfortunately, there’s that super-specialized software problem. You don’t have it, so Lisa can’t lease your servers and accelerators. Poor AWS.

Next, let’s assume that you’re a data scientist who is expert at developing neural network applications. We’ll call you “Norm.” You know how to create an application to solve Lisa’s problem. You know how to select the best training data and how to optimize and tune the coefficients so that Lisa’s data would flow beautifully through the machine, giving exactly the results she needs. You could probably even make a business with your application as software-as-a-service running on AWS servers. Unfortunately, you don’t know anything about FPGA design, and especially about designing FPGA-based accelerators. Your software-only version of the application would weigh in a couple of orders of magnitude too slow to solve the problem. Poor Norm.

Moving right along, let’s say you’re an expert FPGA designer. We’ll call you “Franz.” You’re a black-belt at RTL, high-level design, and architecture optimization. You could design the accelerators Norm needs in your sleep. You could maybe even make a business selling those accelerators to applications experts like Norm. Or even better – offering them “hardware as a service” for anybody to use on AWS’s servers. Unfortunately, you need a bunch of IP blocks to get the job done, or else you’ll have to build the whole application from scratch, which wouldn’t be practical. The IP blocks you need aren’t licensed in a way that lets you incorporate them into your accelerators and re-license them on a pay-per-use basis, and you can’t plunk down six-figure IP licensing fees just for this development project. Poor Franz.

Finally, let’s assume you are “Inez,” with tons of experience at low-level RTL design of highly-optimized IP blocks. You’ve tried to make a business with your IP, but, in the past, you developed FPGA IP intending to sell it, only to have the FPGA companies come in and offer similar IP for free. You’ve been somewhat successful licensing your IP to companies using it for major hardware development projects in ASICs, ASSPs, and FPGAs, but your licensing scheme has no provisions for people like Franz to use your IP to develop an accelerator, which would then be licensed to Norm, who would then license the whole thing to Lisa on a software-as-a-service basis running on AWS’s FPGA cloud. Your IP would somehow have to allow itself to be used on all of those random FPGAs on AWS’s servers, which doesn’t jibe at all with your IP protection scheme. Poor Inez.

Accelize is a French company – a spinoff of a twenty-year-old IP development company called “PLDA.” Accelize has some FPGA design tools to sell, and those tools are specifically aimed at application developers like Franz (and maybe even Norm) who want to create specialized accelerators to run on FPGAs (or clusters of FPGAs). Their tools help to stitch together the necessary IP and to take a high-level approach that simplifies and speeds up the development of accelerators. Unfortunately, the whole ecosystem for acceleration-as-a-service doesn’t exist. Everyone in the stack above, from Inez to Franz to Norm to AWS to Lisa, would be thrilled if there were a secure, standard way to make this happen – and, of course, Accelize would be thrilled about that too, because there would be a solid market for their tools in this scenario. But this ecosystem does not yet exist. Poor Accelize.

But Accelize is trying to do something about it. The company has rolled out an entire ecosystem roadmap for acceleration-as-a-service, and they are working to develop and deploy the necessary components and to establish the required agreements to make it all happen. The way Accelize sees it, there are three major challenges: the FPGA programming problem, the IP procurement problem, and the incompatibility of business, economic, and legal models. They are attacking each one with a comprehensive and innovative solution that should please everyone from “Inez” to “Lisa.”

The way Accelize sees it, “IP providers” (companies like their old buddies at PLDA) are creating high-value IP such as video transcoders, scalers, HDR modules, convolutional neural network processors, and so forth. That IP would be very useful to “accelerator developers” who want to do things like image classification in video streams. But, since that IP is currently licensed for use in electronic system designs, it’s not practical (or legal) to incorporate it into FPGA accelerators running in cloud-computing situations.

What Accelize proposes is a three-pronged solution. First, the “QuickAlliance” is an ecosystem of accelerator developers and IP providers who agree to participate in the acceleration-as-a-service scheme. Second, “QuickPlay” is an FPGA development framework/tool set that facilitates the creation and deployment of FPGA accelerators based on IP from participating suppliers. Finally, “QuickStore” is an application marketplace that allows end users to license applications developed in this ecosystem on cloud-service-provider servers. Doing all this requires the participation of IP providers, accelerator developers, and cloud compute providers.
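To make the idea a little more concrete, here is a minimal sketch of what composing an accelerator out of pay-per-use IP blocks might look like to a developer like Franz. It is purely illustrative – the IpBlock and Pipeline classes below are invented for this article and are not QuickPlay’s actual API, and the fees shown are made-up numbers.

```python
# Hypothetical sketch of composing an FPGA accelerator from licensed IP blocks.
# These classes are illustrative only; they are NOT QuickPlay's real API.
from dataclasses import dataclass, field
from typing import List


@dataclass
class IpBlock:
    """One licensed IP block (e.g., a scaler or CNN engine) with pay-per-use terms."""
    name: str
    vendor: str          # the IP provider who gets paid per use (an "Inez")
    fee_per_call: float  # assumed pay-per-use fee, in dollars


@dataclass
class Pipeline:
    """An accelerator built by chaining IP blocks into a dataflow pipeline."""
    blocks: List[IpBlock] = field(default_factory=list)

    def add(self, block: IpBlock) -> "Pipeline":
        self.blocks.append(block)
        return self

    def cost_per_call(self) -> float:
        # Every block in the chain gets metered when the accelerator runs.
        return sum(b.fee_per_call for b in self.blocks)


# Franz assembles a video-classification accelerator from third-party IP.
accel = (Pipeline()
         .add(IpBlock("video_decoder", vendor="Inez IP Co.", fee_per_call=0.0002))
         .add(IpBlock("scaler", vendor="Inez IP Co.", fee_per_call=0.0001))
         .add(IpBlock("cnn_engine", vendor="Other IP Co.", fee_per_call=0.0010)))
print(f"IP cost per invocation: ${accel.cost_per_call():.4f}")
```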

Accelize has come up with a secure pay-per-use licensing scheme that extends from the application layer all the way back through the accelerators and IP that make it happen. So, when Lisa runs her data through her application on AWS, the meter runs for everyone in the stack, and each stakeholder gets compensated according to their particular licensing terms. It’s a clever and innovative solution that could create an entire profitable universe for IP developers, accelerator developers, application experts, and cloud computing providers. Of course, Accelize would be in there taking a cut somewhere (kinda like a credit card company, perhaps?) and their design tools would be integral in making the appropriate licensing and use connections in the actual application.
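As a back-of-the-envelope illustration of how that metering could work in principle, the sketch below records one usage event per run and splits the charge among the stakeholders according to assumed per-run rates. The names, rates, and the meter_run function are all assumptions made up for this example; they are not Accelize’s actual mechanism or pricing.

```python
# Illustrative-only model of pay-per-use metering across the whole stack.
# Stakeholder names and rates are invented; this is not Accelize's mechanism.
from collections import defaultdict

# Assumed per-run licensing terms for each party in the chain.
RATES_PER_RUN = {
    "AWS (compute)":        0.0500,
    "Norm (application)":   0.0200,
    "Franz (accelerator)":  0.0100,
    "Inez (IP blocks)":     0.0013,
    "Accelize (platform)":  0.0020,
}

ledger = defaultdict(float)  # running total owed to each stakeholder


def meter_run(runs: int) -> float:
    """Record a batch of runs and credit every stakeholder per their terms."""
    total = 0.0
    for stakeholder, rate in RATES_PER_RUN.items():
        amount = rate * runs
        ledger[stakeholder] += amount
        total += amount
    return total  # what Lisa is billed for this batch


bill = meter_run(runs=10_000)
print(f"Lisa's bill: ${bill:,.2f}")
for stakeholder, earned in ledger.items():
    print(f"  {stakeholder}: ${earned:,.2f}")
```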

Accelize has also worked to make this system platform independent, so an application deployed on one cloud provider that happens to use Xilinx FPGAs should also work on a different cloud provider whose accelerators are built from Intel FPGAs, for example. That portability and platform independence should let application developers sell their wares and services more broadly without having to do complex ports to different compute architectures.
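One way to picture that portability is as a thin dispatch layer that hides which vendor’s FPGA actually sits behind the accelerator, so the application code stays the same on either provider. The backend interface below is a hypothetical sketch of that idea, not Accelize’s implementation, and the class and function names are invented for illustration.

```python
# Hypothetical portability layer: the application talks to one interface, and a
# backend matching the cloud provider's FPGA family is chosen at deploy time.
from abc import ABC, abstractmethod


class FpgaBackend(ABC):
    """Vendor-neutral view of an FPGA accelerator slot (illustrative only)."""

    @abstractmethod
    def load_accelerator(self, accelerator_id: str) -> None: ...

    @abstractmethod
    def run(self, data: bytes) -> bytes: ...


class XilinxBackend(FpgaBackend):
    def load_accelerator(self, accelerator_id: str) -> None:
        print(f"Loading {accelerator_id} onto a Xilinx-based instance")

    def run(self, data: bytes) -> bytes:
        return data  # placeholder for the accelerated computation


class IntelBackend(FpgaBackend):
    def load_accelerator(self, accelerator_id: str) -> None:
        print(f"Loading {accelerator_id} onto an Intel-based instance")

    def run(self, data: bytes) -> bytes:
        return data  # placeholder for the accelerated computation


def deploy(provider: str, accelerator_id: str) -> FpgaBackend:
    """Pick whichever backend matches the provider's hardware."""
    backend = XilinxBackend() if provider == "aws" else IntelBackend()
    backend.load_accelerator(accelerator_id)
    return backend


# Norm's application code is identical on either provider.
result = deploy("aws", "video_classifier_v1").run(b"lisa's data")
```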

Accelize’s vision is certainly ambitious, and there are many obstacles to overcome before it is up and working on a broad scale. But the micro-licensing scheme and the idea of acceleration as a service should be extremely attractive in the current climate – for every stakeholder in the chain. It will be interesting to watch what happens.
