
Driving Down Intelligent Vision

Microchip Solution Lowers the Bar

For FPGAs, machine vision represents perhaps the ultimate killer app. First, the video landscape is varied enough that there is no “one size fits all” way to manage the data coming into and out of the compute engine. With so many resolutions, frame rates, and encoding standards in play, the permutations are dizzying. So dizzying, in fact, that FPGAs have become a go-to solution for design teams wanting to make a video system robust and agnostic about the formats coming in and going out.

So, since you’ve already got an FPGA or two in there anyway…

With the advent of AI-based vision algorithms, we have a need for very high-performance, low-power inferencing of convolutional neural network (CNN) models. In vision, we can often quantize models down to very narrow bit-widths and still retain adequate accuracy. This is the domain where FPGA fabric shines. Quantized, optimized models can be realized in FPGAs at remarkable performance levels, using only a tiny fraction of the power that would be required for software implementations on conventional processors. The vast arrays of multiplier/DSP resources on many FPGAs bring incredible performance to the table, in both throughput and latency, with a very small power tab.
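
To make the quantization point concrete, here is a minimal NumPy sketch of symmetric post-training quantization. It illustrates the general technique only, not Microchip's tooling, and the function and tensor names are ours:

```python
import numpy as np

def quantize_symmetric(weights: np.ndarray, bits: int = 8):
    """Symmetric uniform quantization of a weight tensor to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1              # e.g., 127 for int8
    scale = np.max(np.abs(weights)) / qmax  # one scale factor per tensor
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# Narrower weights mean smaller FPGA multipliers; the reconstruction error
# is measurable, so accuracy vs. bit-width can be traded off per layer.
w = np.random.randn(64, 64).astype(np.float32)
for bits in (8, 6, 4):
    q, s = quantize_symmetric(w, bits)
    print(f"{bits}-bit: mean abs error = {np.abs(w - dequantize(q, s)).mean():.4f}")
```

Because the error each bit-width introduces can be measured against a validation set, designers can shrink a model until accuracy starts to suffer, then map the result onto correspondingly narrow FPGA arithmetic.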

The flexibility of FPGA devices means that an infinite variety of video formats, video filtering and processing algorithms, and CNN models can be implemented on the same device – a near slam-dunk if you are creating edge-based systems that need to know what they’re looking at. The combo of problems for edge embedded vision plays to the sweet spot of FPGAs in just about every possible way.

But, of course, there are a couple of catches.

First, the problem has traditionally required a pretty high-end FPGA. That means power and cost are not in line with everyone’s BOM budgets. As you get closer to the bleeding edge of FPGA technology, the vendor margins get healthier and the price per part climbs out of reach for many higher-volume applications. When designing something like a security camera for broad distribution, a high-end FPGA is simply a non-starter.

Second, FPGAs are just plain hard to use. Despite the best efforts of FPGA companies over the past decades, doing a design from scratch for an FPGA requires a competent team of FPGA experts and a few months of design time. Many companies moving into the embedded vision space simply do not have easy access to that kind of engineering expertise, and that makes designing-in an FPGA a risky and costly prospect for them.

Now, however, the bar is being lowered. Microchip this week announced their “Smart Embedded Vision Initiative,” which aims to bring robust, low-cost, low-power, high-performance vision solutions that don’t require a team of FPGA experts to design in. Microchip says the initiative is designed to bring “IP, hardware and tools for low-power, small form factor machine vision designs across the industrial, medical, broadcast, automotive, aerospace and defense markets.”

That’s a lot of markets.

For those of you who haven’t been following every zig and zag in the FPGA world, Microchip acquired Microsemi back in May 2018, and Microsemi had acquired Actel in 2010, so today’s Microchip FPGAs have direct lineage back to Actel’s technology. Actel specialized in low-power, high-security, high-reliability FPGAs that did not use the conventional SRAM-based architecture. Instead, Actel used antifuse and flash in their FPGAs, making them non-volatile to boot.

This bit of history is important in the current context because the FPGAs at the center of this new initiative are Microchip’s PolarFire FPGAs, which bring a bevy of nice benefits to the embedded vision market. Microchip claims that PolarFire has 30-50% lower power consumption than competitive mid-range FPGAs and “five to 10 times lower static power” (claims which, despite some grammar and semantic fuzziness, we can buy into). If you’re trying to work within a tight power budget, those numbers could make a huge difference.
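
For a rough sense of scale, here is a back-of-the-envelope sketch of what those ratios imply. The baseline wattages below are hypothetical placeholders of our own; only the claimed ratios come from Microchip's announcement:

```python
# Hypothetical baseline figures for a competing mid-range FPGA; only the
# "30-50% lower total" and "5-10x lower static" ratios come from Microchip.
baseline_total_w = 4.0   # assumed total power of a competing device (W)
baseline_static_w = 0.8  # assumed static portion of that budget (W)

total_low, total_high = baseline_total_w * 0.50, baseline_total_w * 0.70
static_low, static_high = baseline_static_w / 10, baseline_static_w / 5

print(f"Implied PolarFire total power:  {total_low:.1f}-{total_high:.1f} W")
print(f"Implied PolarFire static power: {static_low:.2f}-{static_high:.2f} W")
```

Even with made-up baselines, the shape of the result is the point: in an always-on device such as a camera, a 5-10x cut in static power dominates the thermal and battery math.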

The “Smart Embedded Vision Initiative” includes a useful array of features and functions:

  • Serial Digital Interface (SDI) IP to transport uncompressed video data streams over coaxial cabling at multiple speeds: HD-SDI (1.485 Gbps, 720p, 1080i), 3G-SDI (2.970 Gbps, 1080p60), 6G-SDI (5.94 Gbps, 4Kp30) and 12G-SDI (11.88 Gbps, 4Kp60). (A quick bandwidth sanity check follows this list.)
  • 1.5 Gbps per lane MIPI-CSI-2 IP, typically used in industrial cameras. MIPI-CSI-2 is the interface that links image sensors to the FPGA. The PolarFire family supports receive speeds up to 1.5 Gbps per lane and transmit speeds up to 1 Gbps per lane.
  • 2.3 Gbps per lane SLVS-EC (Scalable Low-Voltage Signaling with Embedded Clock) Rx IP, an image sensor interface supporting high-resolution cameras. Customers can implement a two-lane or eight-lane SLVS-EC Rx FPGA core.
  • Multi-rate Gigabit Ethernet MAC supporting 1, 2.5, 5 and 10 Gbps speeds over an Ethernet PHY, including a Universal Serial 10G Media Independent Interface (USXGMII) MAC IP with auto-negotiation.
  • 6.25 Gbps CoaXPress v1.1 Host and Device IP, which is a standard used in high-performance machine vision, medical, and industrial inspection. Microchip also plans to support CoaXPress v2.0, which doubles the bandwidth to 12.5 Gbps.
  • HDMI 2.0b – The HDMI IP core today supports resolutions up to 4K at 60 fps transmit and 1080p at 60 fps receive.
  • Imaging IP bundle, which builds on the MIPI-CSI-2 IP and includes image processing IPs for edge detection, alpha blending, and image enhancement for color, brightness and contrast adjustments.
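
As promised above, a quick sanity check on those SDI line rates. The sketch below computes the raw active-pixel bandwidth of each format; the assumptions (uncompressed 4:2:2 sampling, 10 bits per component) are ours, and real SDI links carry blanking and protocol overhead on top of these figures:

```python
# Back-of-the-envelope check of why each video format needs the interface
# tier listed above. Assumes uncompressed 4:2:2 10-bit video (20 bits/pixel).
FORMATS = {
    "720p60":  (1280, 720, 60),
    "1080p60": (1920, 1080, 60),
    "4Kp30":   (3840, 2160, 30),
    "4Kp60":   (3840, 2160, 60),
}
BITS_PER_PIXEL = 20  # 4:2:2 sampling, 10 bits per component

for name, (w, h, fps) in FORMATS.items():
    gbps = w * h * fps * BITS_PER_PIXEL / 1e9
    print(f"{name}: ~{gbps:.2f} Gbps of active pixel data")
```

The results (roughly 1.1, 2.5, 5.0, and 10.0 Gbps respectively) line up neatly with the HD-SDI, 3G-SDI, 6G-SDI, and 12G-SDI tiers listed above.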

The Smart Embedded Vision initiative also includes a network of partners: Kaya Instruments, which provides PolarFire FPGA IP cores for CoaXPress v2.0 and 10 GigE Vision; Alma Technology; Bitec; and ASIC Design Services, whose Core Deep Learning (CDL) framework enables a power-efficient Convolutional Neural Network (CNN)-based imaging and video platform for embedded and edge computing applications.

With Xilinx and Intel – the two big players in FPGA – focusing more of their energy on data center and 5G rollouts, Microchip could capture a very large niche in the rapidly growing embedded vision space. Their devices have some compelling advantages, particularly in the area of power consumption, and their collection of IP and tools should make designing-in not exactly a snap, but far easier than bringing up video and AI/CNN applications from bare metal using FPGA design tools.
