
Intel Acquires Omnitek

FPGA AI Race Heats Up

Following the recent announcement of the new Agilex FPGA family, Intel announced they are acquiring Omnitek – a developer of video and vision acceleration IP for FPGAs. As the market for FPGA-powered acceleration heats up, one of the killer applications is video and machine vision. FPGAs are extremely well suited to video analytics: their combination of acceleration capability, flexible handling of high-bit-rate data streams, and adaptability to application-specific parameters is a near-ideal fit.

Omnitek is a UK-based company with about 40 employees that has been in business for twenty years. They focus on IP and services for the design of video and vision systems based on FPGAs and SoCs. They specialize in high-performance video/vision and AI/machine learning for markets including medical, broadcast, professional AV, automotive, government/homeland security, aerospace/defense, industrial/scientific, consumer, and test & measurement. The company also manufactures video test & measurement equipment to complement their IP and services offerings.

Omnitek is being rolled into the programmable systems group (PSG, which is the former Altera team) at Intel. The combination makes sense, as video and vision acceleration with machine learning is clearly one of the key battlegrounds for Intel PSG in their ongoing rivalry with Xilinx. Xilinx has made strides in repositioning itself to go head-to-head with Intel in the data center acceleration market with their “ACAP” FPGA offering, their Alveo accelerator cards, and a rich portfolio of IP. Xilinx also bolstered their offering with the acquisition of DeePhi Tech, a Beijing-based start-up with capabilities in machine learning built around Xilinx devices.

For Intel, the Omnitek acquisition is a strategic move, strengthening the company’s own machine-learning portfolio while effectively blocking Xilinx from taking full advantage of Omnitek’s offering. Omnitek has developed more than 220 FPGA IP cores, along with accompanying software for video-related applications. In the battle to capture the rapidly growing intelligent video acceleration market, FPGA companies will need to engage customers who do not have experienced teams of FPGA designers, so ready-baked optimized IP, reference designs, software, and services will make the advantages of FPGAs available to a much larger set of customers and applications.

At almost the same time, Omnitek announced the availability of a new IP offering for convolutional neural networks (CNNs), “delivering world-leading performance per watt at full FP32 accuracy” with the Intel Arria 10 GX FPGA. The Omnitek Deep Learning Processing Unit (DPU) achieves 135 GOPS/W at 32-bit floating-point accuracy when running the VGG-16 CNN in an Arria 10 GX 1150. The design “employs a novel mathematical framework combining low-precision fixed point maths with floating point maths to achieve this very high compute density with zero loss of accuracy.”
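Omnitek hasn’t published the details of that mathematical framework, but the general technique it alludes to – cheap low-precision fixed-point multiplies rescaled in floating point – can be sketched in a few lines of NumPy. Everything below is an illustrative assumption about the general approach, not Omnitek’s actual method:

```python
import numpy as np

def quantize(x, bits=8):
    """Quantize a tensor to signed fixed point; return integers plus a scale."""
    scale = np.abs(x).max() / (2 ** (bits - 1) - 1)
    q = np.round(x / scale).astype(np.int32)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 128)).astype(np.float32)  # layer weights
a = rng.standard_normal((128,)).astype(np.float32)     # input activations

qw, sw = quantize(w)
qa, sa = quantize(a)

# Integer multiply-accumulate (a natural fit for FPGA DSP blocks),
# followed by one floating-point rescale per output element.
y_fixed = (qw @ qa).astype(np.float64) * (sw * sa)
y_float = w @ a  # full-precision reference

rel_err = np.abs(y_fixed - y_float).max() / np.abs(y_float).max()
print(f"max relative error: {rel_err:.4f}")
```

The integer multiply-accumulate is what delivers the compute density, while the single floating-point rescale per output keeps the result close to the full-precision reference.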

The DPU is scalable across both Intel’s Arria 10 GX and Stratix 10 GX devices, and it can be tuned for either low cost or high performance for either embedded or data center use. Omnitek says the DPU is “fully software programmable in C/C++ or Python using standard frameworks such as TensorFlow, enabling it to be configured for a wide range of standard CNN models including GoogLeNet, ResNet-50 and VGG-16 as well as custom models.”
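Omnitek’s actual toolchain isn’t public; as a hedged sketch of how such framework-driven configuration typically works, a compiler walks the trained model’s layer graph and maps each supported operator onto accelerator primitives, falling back to the host for anything else. The `compile_for_dpu` function, the supported-operator set, and the layer names below are all hypothetical:

```python
# Hypothetical sketch of a framework-to-accelerator lowering pass.
# The operator set and compile function are invented for illustration;
# they are not Omnitek's actual API.

SUPPORTED = {"conv2d", "relu", "maxpool", "dense", "softmax"}

def compile_for_dpu(layers):
    """Map (name, kind) layer pairs onto accelerator ops, falling back
    to the host CPU for any operator the accelerator lacks."""
    plan = []
    for name, kind in layers:
        target = "dpu" if kind in SUPPORTED else "cpu"
        plan.append((name, kind, target))
    return plan

# A VGG-16-style fragment, plus one custom operator, as (name, kind) pairs.
model = [("conv1_1", "conv2d"), ("relu1_1", "relu"),
         ("pool1", "maxpool"), ("fc8", "dense"),
         ("prob", "softmax"), ("custom_op", "attention")]

for name, kind, target in compile_for_dpu(model):
    print(f"{name:10s} {kind:10s} -> {target}")
```

This kind of per-operator fallback is what lets a single tool handle both standard models like GoogLeNet, ResNet-50, and VGG-16 and custom models with operators the accelerator doesn’t natively support.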

With this acquisition, Intel gets a strong boost in IP but, more importantly, a big influx of talent – engineers already expert in creating applications based on Intel FPGAs. Intel’s FPGA strategy appears to be to deploy as many ready-made applications as possible into high-growth markets, and to have as many of those solutions as possible already bundled into Intel hardware solutions. This takes advantage of the company’s incredible breadth of technology as well as capitalizing on their dominance in the data center. The approach creates a formidable barrier to companies such as NVidia and Xilinx, who aim to make a living selling into what amounts to an Intel-owned ecosystem. The fortification of Intel’s FPGA offering – in hardware, IP, and software – raises the bar for what those competitors will have to do in order to gain a foothold in the acceleration part of those systems.

Looking at just the AI acceleration market in the data center, NVidia has created a decent business selling GPUs as acceleration engines for both training and inference use in data centers. The problem with GPU-based solutions is that they do well in the acceleration department but don’t provide much benefit in the performance-per-watt department. FPGA-based accelerators are enormously more power efficient, but they have traditionally had a steep development curve to get them programmed optimally.
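A back-of-envelope calculation shows what a performance-per-watt rating like the 135 GOPS/W quoted for Omnitek’s DPU means in practice. The VGG-16 operation count used here is a rough public estimate (about 15.5 billion MACs, or roughly 31 G ops per image), not a vendor figure:

```python
# Back-of-envelope: convert a GOPS/W rating into inferences per second
# per watt for a specific network. The VGG-16 op count is an assumed
# public estimate, not a figure from Intel or Omnitek.

gops_per_watt = 135.0   # Omnitek DPU, FP32, on Arria 10 GX 1150
vgg16_gops = 31.0       # ~15.5 GMACs, i.e. ~31 G ops per 224x224 image

inferences_per_sec_per_watt = gops_per_watt / vgg16_gops
print(f"~{inferences_per_sec_per_watt:.1f} VGG-16 inferences/s per watt")
```

Multiplying that figure by an accelerator card’s power budget gives a quick throughput estimate, which is the comparison that matters when weighing FPGA accelerators against GPUs on efficiency.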

It appears that Intel is taking a multi-pronged approach to squeeze NVidia out of this space. First, Intel has significantly improved the performance of their Xeon processors for AI inferencing tasks. With something like a 30x inferencing upgrade announced in their recent release, the baseline for applications that would even need acceleration is raised significantly. After all, if you’ve already got Xeon-based servers sitting there and they can handle your particular AI task, why go for acceleration at all?

Then, for those who still need acceleration, there are many competitors in the market, including NVidia with their GPUs, companies such as Xilinx and Achronix with third-party FPGA solutions, and Intel themselves with their own range of accelerators, including FPGAs from PSG. In many situations, Intel and their OEMs are packaging FPGAs into servers, cards, and even the same packages as Xeon processors. With Intel’s FPGA hardware already sitting in the system, the argument for bringing in third-party accelerators becomes even more difficult to make.

Regardless of the competitive landscape, several things are clear. First, FPGAs will play a much larger role than ever in the enormous opportunities emerging in and out of the data center with the explosion of data generated by the latest generations of cameras, sensors, and assorted IoT devices. The challenge of processing and moving all that data is a perfect match for FPGAs, but FPGA companies must be creative in their approach to reducing the learning curve for engineering teams wanting to take advantage of their capabilities. By pre-engineering IP, reference designs, software, and entire applications – in addition to radically upgrading development tool suites – FPGA suppliers can enormously expand their customer base and the number of systems in which they are deployed. We are clearly at the doorstep of the biggest opportunity for FPGA market growth in history, but that growth will not occur without some serious innovation in the ecosystem. It will be interesting to watch.

