
Driving Down Intelligent Vision

Microchip Solution Lowers the Bar

For FPGAs, machine vision represents perhaps the ultimate killer app. First, the video landscape is varied enough that there is no “one size fits all” way to manage the data coming into and out of the compute engine. With so many resolutions, frame rates, and encoding standards, the permutations are dizzying. So dizzying, in fact, that FPGAs have become a go-to solution for design teams wanting to make any kind of video solution robust and agnostic when it comes to video in and out.

So, since you’ve already got an FPGA or two in there anyway…

With the advent of AI-based vision algorithms, we need high-performance, low-power inference of convolutional neural network models. In vision, we can often quantize models down to very narrow bit-widths and still retain adequate accuracy. This is the domain where FPGA fabric shines. Quantized, optimized models can be realized in FPGAs at remarkable performance levels, using only a tiny fraction of the power that would be required for software implementations on conventional processors. The vast arrays of multiplier/DSP resources on many FPGAs bring incredible performance to the table – both throughput and latency – with a very small power tab.
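As a rough illustration of why narrow bit-widths matter, here is a minimal Python sketch of symmetric uniform quantization applied to a stand-in weight tensor. The weight distribution and bit-widths are illustrative, not taken from any particular model or from Microchip's tools:

```python
import numpy as np

def quantize_symmetric(weights, bits):
    """Symmetric uniform quantization of a weight tensor to `bits` bits.

    Maps floats onto the signed integer grid [-(2^(bits-1)-1), 2^(bits-1)-1]
    and back, so we can see how much precision a narrow datapath gives up.
    """
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(weights)) / qmax
    q = np.clip(np.round(weights / scale), -qmax, qmax)
    return q * scale  # dequantized ("fake-quantized") weights

rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=10_000)  # stand-in for one CNN layer's weights

for bits in (8, 4):
    err = np.mean((w - quantize_symmetric(w, bits)) ** 2)
    print(f"int{bits}: mean squared quantization error = {err:.2e}")
```

The error grows as the bit-width shrinks, but for many vision models even 8-bit (or narrower) weights keep accuracy acceptable, and every bit removed shrinks the multipliers and memory the FPGA fabric has to provide.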

The flexibility of FPGA devices means that an infinite variety of video formats, video filtering and processing algorithms, and CNN models can be implemented on the same device – a near slam-dunk if you are creating edge-based systems that need to know what they’re looking at. The combo of problems for edge embedded vision plays to the sweet spot of FPGAs in just about every possible way.

But, of course, there are a couple of catches.

First, the problem has traditionally required a pretty high-end FPGA. That means power and cost are not in line with everyone’s BOM budgets. As you get closer to the bleeding edge of FPGA technology, vendor margins get healthier and the price per part becomes prohibitive for many higher-volume applications. When designing something like a security camera for broad distribution, high-end FPGAs are simply too expensive.

Second, FPGAs are just plain hard to use. Despite the best efforts of FPGA companies over the past decades, doing a design from scratch for an FPGA requires a competent team of FPGA experts and a few months of design time. Many companies moving into the embedded vision space simply do not have easy access to that kind of engineering expertise, and that makes designing-in an FPGA a risky and costly prospect for them.

Now, however, the bar is being lowered. Microchip this week announced their “Smart Embedded Vision Initiative,” which aims to deliver robust, low-cost, low-power, high-performance vision solutions that don’t require a team of FPGA experts to design in. Microchip says the initiative is designed to bring “IP, hardware and tools for low-power, small form factor machine vision designs across the industrial, medical, broadcast, automotive, aerospace and defense markets.”

That’s a lot of markets.

For those of you who haven’t been following every zig and zag in the FPGA world, Microchip acquired Microsemi back in May 2018. Back in 2010, Microsemi acquired Actel, so today’s Microchip FPGAs have direct lineage back to Actel’s technology. Actel specialized in low-power, high-security, high-reliability FPGAs that did not use the conventional SRAM-like architecture. Instead, Actel used antifuse and flash in their FPGAs, making them non-volatile to boot. 

This bit of history is important in the current context because the FPGAs at the center of this new initiative are Microchip’s PolarFire FPGAs, which bring a bevy of nice benefits to the embedded vision market. Microchip claims that PolarFire has 30-50% lower power consumption than competitive mid-range FPGAs and “five to 10 times lower static power” (which, despite some grammar and semantic issues, we can buy into). If you’re trying to work within a tight power budget, those numbers could make a huge difference.
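To see what those percentages mean in practice, here is a quick back-of-the-envelope calculation in Python. The competitor power figures are hypothetical placeholders chosen only to make the arithmetic concrete, not measurements of any real device:

```python
# Illustrative check of the claimed savings using made-up baseline numbers.
competitor_total_mw = 2000.0   # hypothetical mid-range FPGA total power
competitor_static_mw = 400.0   # hypothetical static (leakage) portion

# Midpoints of Microchip's claimed ranges: "30-50% lower" total power,
# "five to 10 times lower" static power.
polarfire_total_mw = competitor_total_mw * (1 - 0.40)
polarfire_static_mw = competitor_static_mw / 7.5

print(f"Total power:  {competitor_total_mw:.0f} mW -> {polarfire_total_mw:.0f} mW")
print(f"Static power: {competitor_static_mw:.0f} mW -> {polarfire_static_mw:.1f} mW")
```

For an always-on edge camera that spends most of its life idle, the static-power multiple matters at least as much as the headline total, since leakage is what drains the budget while nothing interesting is happening in the frame.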

The “Smart Embedded Vision Initiative” includes a useful array of features and functions including:

  • Serial Digital Interface (SDI) IP to transport uncompressed video data streams over coaxial cabling in multiple speeds: HD-SDI (1.485 Gbps, 720p, 1080i), 3G-SDI (2.970 Gbps, 1080p60), 6G-SDI (5.94 Gbps, 4Kp30) and 12G-SDI (11.88 Gbps, 4Kp60).
  • 1.5 Gbps per lane MIPI-CSI-2 IP, which is typically used in industrial cameras. MIPI-CSI-2 is a sensor interface that links image sensors to FPGAs. The PolarFire family supports receive speeds up to 1.5 Gbps per lane and transmit speeds up to 1 Gbps per lane.
  • 2.3 Gbps per lane SLVS-EC Rx, which is an image sensor interface IP supporting high-resolution cameras. Customers can implement a two-lane or eight-lane SLVS-EC Rx FPGA core.
  • Multi-rate Gigabit MAC supporting 1, 2.5, 5 and 10 Gbps speeds over an Ethernet PHY, enabling Universal Serial 10G Media Independent Interface (USXGMII) MAC IP with auto-negotiation.
  • 6.25 Gbps CoaXPress v1.1 Host and Device IP, which is a standard used in high-performance machine vision, medical, and industrial inspection. Microchip also plans to support CoaXPress v2.0, which doubles the bandwidth to 12.5 Gbps.
  • HDMI 2.0b – The HDMI IP core today supports resolutions up to 4K at 60 fps transmit and 1080p at 60 fps receive.
  • Imaging IP bundle, which features the MIPI-CSI-2 and includes image processing IPs for edge detection, alpha blending and image enhancement for color, brightness and contrast adjustments.
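A quick way to sanity-check which of these interfaces a given video stream needs is to compute its raw payload rate. The Python sketch below counts pixel payload only, ignoring blanking intervals and ancillary data, which is why real SDI link rates run somewhat higher than the payload numbers:

```python
def raw_video_gbps(width, height, fps, bits_per_pixel):
    """Uncompressed video payload rate in Gbit/s (pixels only, no blanking)."""
    return width * height * fps * bits_per_pixel / 1e9

# 1080p60 with 10-bit 4:2:2 sampling -> 20 bits per pixel on average
rate_1080p60 = raw_video_gbps(1920, 1080, 60, 20)
print(f"1080p60 @ 10-bit 4:2:2: {rate_1080p60:.2f} Gbps payload "
      f"-> needs 3G-SDI (2.970 Gbps), exceeds HD-SDI (1.485 Gbps)")

# 4Kp60 at the same sampling quadruples the pixel count
rate_4kp60 = raw_video_gbps(3840, 2160, 60, 20)
print(f"4Kp60   @ 10-bit 4:2:2: {rate_4kp60:.2f} Gbps payload "
      f"-> needs 12G-SDI (11.88 Gbps)")
```

The same arithmetic explains the lane counts elsewhere in the list: a sensor pushing a few Gbps simply gets split across multiple MIPI-CSI-2 or SLVS-EC lanes until the per-lane rate fits.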

The Smart Embedded Vision initiative also includes a network of partners: Kaya Instruments, which provides PolarFire FPGA IP cores for CoaXPress v2.0 and 10 GigE Vision; Alma Technology; Bitec; and ASIC Design Services, whose Core Deep Learning (CDL) framework enables power-efficient Convolutional Neural Network (CNN)-based imaging and video platforms for embedded and edge computing applications.

With Xilinx and Intel – the two big players in FPGA – focusing more of their energy on data center and 5G rollouts, Microchip could capture a very large niche in the rapidly growing embedded vision space. Their devices have some compelling advantages, particularly in the area of power consumption, and their collection of IP and tools should make designing-in not exactly a snap, but far easier than bringing up video and AI/CNN applications from bare metal using FPGA design tools.
