
Driving Down Intelligent Vision

Microchip Solution Lowers the Bar

For FPGAs, machine vision represents perhaps the ultimate killer app. First, the video landscape is varied enough that there is no “one size fits all” way to manage the data coming into and out of the compute engine. With so many resolutions, frame rates, encoding standards, and so forth, the permutations are dizzying. So dizzying, in fact, that FPGAs have become a go-to solution for design teams wanting to make any kind of video solution robust and agnostic when it comes to video in and out.

So, since you’ve already got an FPGA or two in there anyway…

With the advent of AI-based vision algorithms, we need very high-performance, low-power inferencing of convolutional neural network models. In vision, we can often quantize models down to very narrow bit-widths and still retain adequate accuracy. This is the domain where FPGA fabric shines. Quantized, optimized models can be realized in FPGAs at remarkable performance levels, using only a tiny fraction of the power that would be required for software implementations on conventional processors. The vast arrays of multiplier/DSP resources on many FPGAs bring incredible performance to the table – both throughput and latency – with a very small power tab.
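To make the quantization point concrete, here is a minimal sketch of symmetric post-training quantization of a convolutional layer’s weights down to narrow bit-widths. This is plain NumPy for illustration only – it is not Microchip’s (or anyone else’s) FPGA tool flow, and the tensor shape and bit-widths are arbitrary assumptions:

```python
# Minimal sketch of symmetric post-training quantization (NumPy only).
# Illustrative only; not Microchip's CDL flow. Shapes and bit-widths are assumptions.
import numpy as np

def quantize_symmetric(weights: np.ndarray, bits: int):
    """Quantize a float tensor to signed integers of the given bit-width."""
    qmax = 2 ** (bits - 1) - 1                  # e.g. 7 for 4-bit, 127 for 8-bit
    scale = np.max(np.abs(weights)) / qmax      # one scale factor per tensor
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(0, 0.05, size=(64, 3, 3, 3))     # a toy conv-layer weight tensor

for bits in (8, 6, 4):
    q, scale = quantize_symmetric(w, bits)
    err = np.abs(w - q * scale).mean()
    print(f"{bits}-bit: mean abs quantization error = {err:.6f}")
```

In a real flow, the narrow integer weights would feed the FPGA’s multiplier/DSP blocks directly, with the per-tensor (or per-channel) scale folded into downstream stages – which is where the power and density win comes from.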

The flexibility of FPGA devices means that an infinite variety of video formats, video filtering and processing algorithms, and CNN models can be implemented on the same device – a near slam-dunk if you are creating edge-based systems that need to know what they’re looking at. The combo of problems for edge embedded vision plays to the sweet spot of FPGAs in just about every possible way.

But, of course, there are a couple of catches.

First, the problem has traditionally required a pretty high-end FPGA. That means power and cost are not in line with everyone’s BOM budget. As you get closer to the bleeding edge of FPGA technology, the vendors’ margins get healthier and the price per part becomes prohibitive for many higher-volume applications. If you’re designing something like a security camera for broad distribution, a high-end FPGA simply doesn’t fit the budget.

Second, FPGAs are just plain hard to use. Despite the best efforts of FPGA companies over the past decades, doing a design from scratch for an FPGA requires a competent team of FPGA experts and a few months of design time. Many companies moving into the embedded vision space simply do not have easy access to that kind of engineering expertise, and that makes designing-in an FPGA a risky and costly prospect for them.

Now, however, the bar is being lowered. Microchip this week announced their “Smart Embedded Vision Initiative,” which aims to bring robust, low-cost, low-power, high-performance vision solutions to the table without requiring a team of FPGA experts to design them in. Microchip says the initiative is designed to bring “IP, hardware and tools for low-power, small form factor machine vision designs across the industrial, medical, broadcast, automotive, aerospace and defense markets.”

That’s a lot of markets.

For those of you who haven’t been following every zig and zag in the FPGA world, Microchip acquired Microsemi back in May 2018. Microsemi, in turn, had acquired Actel in 2010, so today’s Microchip FPGAs have a direct lineage back to Actel’s technology. Actel specialized in low-power, high-security, high-reliability FPGAs that did not use the conventional SRAM-based architecture. Instead, Actel used antifuse and flash in their FPGAs, making them non-volatile to boot.

This bit of history is important in the current context because the FPGAs at the center of this new initiative are Microchip’s PolarFire FPGAs, which bring a bevy of nice benefits to the embedded vision market. Microchip claims that PolarFire has 30-50% lower power consumption than competitive mid-range FPGAs and “five to 10 times lower static power” (which, despite some grammar and semantic issues, we can buy into). If you’re trying to work within a tight power budget, those numbers could make a huge difference.

The “Smart Embedded Vision Initiative” includes a useful array of features and functions:

  • Serial Digital Interface (SDI) IP to transport uncompressed video data streams over coaxial cabling at multiple speeds: HD-SDI (1.485 Gbps, 720p, 1080i), 3G-SDI (2.970 Gbps, 1080p60), 6G-SDI (5.94 Gbps, 4Kp30) and 12G-SDI (11.88 Gbps, 4Kp60).
  • 1.5 Gbps per lane MIPI-CSI-2 IP, which is typically used in industrial cameras. MIPI-CSI-2 is a sensor interface that links image sensors to FPGAs. The PolarFire family supports receive speeds up to 1.5 Gbps per lane and transmit speeds up to 1 Gbps per lane (a rough lane-count estimate for typical sensors appears in the sketch after this list).
  • 2.3 Gbps per lane SLVS-EC Rx, which is an image sensor interface IP supporting high-resolution cameras. Customers can implement a two-lane or eight-lane SLVS-EC Rx FPGA core.
  • Multi-rate Gigabit MAC supporting 1, 2.5, 5 and 10 Gbps speeds over an Ethernet PHY, enabling Universal Serial 10 Gigabit Media Independent Interface (USXGMII) MAC IP with auto-negotiation.
  • 6.25 Gbps CoaXPress v1.1 Host and Device IP, which is a standard used in high-performance machine vision, medical, and industrial inspection. Microchip also plans to support CoaXPress v2.0, which doubles the bandwidth to 12.5 Gbps.
  • HDMI 2.0b – The HDMI IP core today supports resolutions up to 4K at 60 fps transmit and 1080p at 60 fps receive.
  • Imaging IP bundle, which features the MIPI-CSI-2 IP and includes image processing IP for edge detection, alpha blending, and image enhancement (color, brightness, and contrast adjustments).
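As a back-of-the-envelope check on why those link speeds matter, here is a rough sketch of the raw bandwidth an uncompressed sensor stream produces and how many MIPI-CSI-2 lanes it would need at the 1.5 Gbps per lane receive rate quoted above. The resolutions and bit depths are arbitrary examples, and the arithmetic ignores blanking and protocol overhead:

```python
# Rough raw-bandwidth estimate for an uncompressed video stream, and the number
# of MIPI-CSI-2 lanes needed at the PolarFire receive rate quoted above (1.5 Gbps/lane).
# Back-of-the-envelope only: ignores blanking, packet overhead, and line coding.
import math

def raw_bandwidth_gbps(width, height, fps, bits_per_pixel):
    return width * height * fps * bits_per_pixel / 1e9

def csi2_lanes_needed(gbps, lane_rate_gbps=1.5):
    return math.ceil(gbps / lane_rate_gbps)

streams = {
    "1080p60, 10-bit raw": (1920, 1080, 60, 10),
    "4Kp30, 12-bit raw":   (3840, 2160, 30, 12),
}

for name, (w, h, fps, bpp) in streams.items():
    bw = raw_bandwidth_gbps(w, h, fps, bpp)
    print(f"{name}: ~{bw:.2f} Gbps raw -> {csi2_lanes_needed(bw)} CSI-2 lanes at 1.5 Gbps")
```

The same arithmetic explains the SDI ladder: 1080p60 fits comfortably within 3G-SDI, while an uncompressed 4Kp60 stream pushes you up to 12G-SDI.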

The Smart Embedded Vision initiative also includes a network of partners: Kaya Instruments, which provides PolarFire FPGA IP cores for CoaXPress v2.0 and 10 GigE Vision; Alma Technologies; Bitec; and ASIC Design Services, which provides a Core Deep Learning (CDL) framework that enables a power-efficient Convolutional Neural Network (CNN)-based imaging and video platform for embedded and edge computing applications.

With Xilinx and Intel – the two big players in FPGA – focusing more of their energy on data center and 5G rollouts, Microchip could capture a very large niche in the rapidly growing embedded vision space. Their devices have some compelling advantages, particularly in the area of power consumption, and their collection of IP and tools should make designing-in not exactly a snap, but far easier than bringing up video and AI/CNN applications from bare metal using FPGA design tools.
