
Driving Down Intelligent Vision

Microchip Solution Lowers the Bar

For FPGAs, machine vision represents perhaps the ultimate killer app. First, the video landscape is varied enough that there is no “one size fits all” way to manage the data coming into and out of the compute engine. With so many resolutions, frame rates, and encoding standards, the permutations are dizzying. So dizzying, in fact, that FPGAs have become a go-to solution for design teams who want any kind of video solution to be robust and agnostic when it comes to video in and out.

So, since you’ve already got an FPGA or two in there anyway…

With the advent of AI-based vision algorithms comes a need for very high-performance, low-power inference of convolutional neural network (CNN) models. In vision, models can often be quantized down to very narrow bit-widths while retaining adequate accuracy, and this is the domain where FPGA fabric shines. Quantized, optimized models can be realized in FPGAs at remarkable performance levels, using only a tiny fraction of the power that would be required for software implementations on conventional processors. The vast arrays of multiplier/DSP resources on many FPGAs bring incredible performance to the table – both throughput and latency – with a very small power tab.
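To make the quantization point concrete, here is a minimal sketch of per-tensor symmetric quantization, the kind of narrowing commonly applied before mapping a CNN onto FPGA multiplier fabric. It is illustrative only – the bit-width, NumPy implementation, and function names are our own, not Microchip's tool flow:

```python
# Minimal sketch: symmetric uniform quantization of a weight tensor to a
# narrow bit-width (illustrative only; not Microchip's CDL flow).
import numpy as np

def quantize_symmetric(w: np.ndarray, bits: int):
    """Map weights to signed integers in [-(2**(bits-1)-1), 2**(bits-1)-1]."""
    qmax = 2 ** (bits - 1) - 1        # e.g., 7 for 4-bit weights
    scale = np.abs(w).max() / qmax    # one scale per tensor; per-channel is also common
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=(64, 3, 3, 3)).astype(np.float32)  # toy conv weights
q, scale = quantize_symmetric(w, bits=4)
w_hat = q * scale                     # dequantized approximation
print("max abs error:", float(np.abs(w - w_hat).max()))
```

Once the weights live in four bits, each multiply needs only a narrow hardware multiplier – exactly the resource FPGAs have in abundance.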

The flexibility of FPGA devices means that an infinite variety of video formats, video filtering and processing algorithms, and CNN models can be implemented on the same device – a near slam-dunk if you are creating edge-based systems that need to know what they’re looking at. The combo of problems for edge embedded vision plays to the sweet spot of FPGAs in just about every possible way.

But, of course, there are a couple of catches.

First, the problem has traditionally required a pretty high-end FPGA, which puts power and cost out of line with many BOM budgets. As you approach the bleeding edge of FPGA technology, the vendor margins get healthier and the price per part climbs beyond what many higher-volume applications can bear. For something like a security camera destined for broad distribution, a high-end FPGA is simply too expensive.

Second, FPGAs are just plain hard to use. Despite the best efforts of FPGA companies over the past decades, doing a design from scratch for an FPGA requires a competent team of FPGA experts and a few months of design time. Many companies moving into the embedded vision space simply do not have easy access to that kind of engineering expertise, and that makes designing-in an FPGA a risky and costly prospect for them.

Now, however, the bar is being lowered. Microchip this week announced their “Smart Embedded Vision Initiative,” which aims to bring robust, low-cost, low-power, high-performance vision to the table without requiring a team of FPGA experts to design it in. Microchip says the initiative is designed to bring “IP, hardware and tools for low-power, small form factor machine vision designs across the industrial, medical, broadcast, automotive, aerospace and defense markets.”

That’s a lot of markets.

For those of you who haven’t been following every zig and zag in the FPGA world, Microchip acquired Microsemi back in May 2018. Back in 2010, Microsemi acquired Actel, so today’s Microchip FPGAs have direct lineage back to Actel’s technology. Actel specialized in low-power, high-security, high-reliability FPGAs that did not use the conventional SRAM-like architecture. Instead, Actel used antifuse and flash in their FPGAs, making them non-volatile to boot. 

This bit of history is important in the current context because the FPGAs at the center of this new initiative are Microchip’s PolarFire FPGAs, which bring a bevy of nice benefits to the embedded vision market. Microchip claims that PolarFire has 30-50% lower power consumption than competitive mid-range FPGAs and “five to 10 times lower static power” (a claim which, despite some grammar and semantic issues, we can buy into). If you’re trying to work within a tight power budget, those numbers could make a huge difference.
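For a sense of scale, here is a back-of-the-envelope calculation. The baseline numbers below are invented purely for illustration – real figures depend entirely on the design and the competing device:

```python
# Hypothetical power budget illustrating the quoted claims: roughly 30-50%
# lower total power and "five to 10 times lower static power."
# All baseline numbers are invented for illustration.
baseline_static_w = 0.5    # hypothetical mid-range competitor static power (W)
baseline_dynamic_w = 2.5   # hypothetical competitor dynamic power (W)
baseline_total_w = baseline_static_w + baseline_dynamic_w

polarfire_static_w = baseline_static_w / 5.0       # conservative end of "5-10x lower"
polarfire_total_w = baseline_total_w * (1 - 0.30)  # conservative end of "30-50% lower"

print(f"baseline total:  {baseline_total_w:.2f} W")
print(f"PolarFire-class: {polarfire_total_w:.2f} W total, "
      f"{polarfire_static_w:.2f} W static (vs {baseline_static_w:.2f} W)")
```

In an always-on, fanless camera, that kind of delta can decide whether the enclosure needs a heatsink at all.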

The “Smart Embedded Vision Initiative” includes a useful array of features and functions including:

  • Serial Digital Interface (SDI) IP to transport uncompressed video data streams over coaxial cabling at multiple speeds: HD-SDI (1.485 Gbps, 720p, 1080i), 3G-SDI (2.970 Gbps, 1080p60), 6G-SDI (5.94 Gbps, 4Kp30) and 12G-SDI (11.88 Gbps, 4Kp60). (The arithmetic behind these line rates is sketched just after this list.)
  • MIPI CSI-2 IP at 1.5 Gbps per lane – a sensor interface, typically used in industrial cameras, that links image sensors to the FPGA. The PolarFire family supports receive speeds up to 1.5 Gbps per lane and transmit speeds up to 1 Gbps per lane.
  • 2.3 Gbps per lane SLVS-EC Rx, which is an image sensor interface IP supporting high-resolution cameras. Customers can implement a two-lane or eight-lane SLVS-EC Rx FPGA core.
  • Multi-rate Gigabit Ethernet MAC supporting 1, 2.5, 5 and 10 Gbps speeds over an Ethernet PHY, delivered as Universal Serial 10GE Media Independent Interface (USXGMII) MAC IP with auto-negotiation.
  • 6.25 Gbps CoaXPress v1.1 Host and Device IP, which is a standard used in high-performance machine vision, medical, and industrial inspection. Microchip also plans to support CoaXPress v2.0, which doubles the bandwidth to 12.5 Gbps.
  • HDMI 2.0b – The HDMI IP core today supports resolutions up to 4K at 60 fps transmit and 1080p at 60 fps receive.
  • Imaging IP bundle, which features the MIPI CSI-2 interface and includes image-processing IP for edge detection, alpha blending, and image enhancement for color, brightness, and contrast adjustments.
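Those SDI line rates, by the way, follow directly from arithmetic: uncompressed 10-bit 4:2:2 video carries two 10-bit words per pixel clock, and the clock counts blanking as well as active pixels. A quick sanity check using the standard SMPTE raster totals (this is plain arithmetic, not anything specific to Microchip’s IP):

```python
# Sanity check on the quoted SDI line rates:
# serial rate = total raster (incl. blanking) x frame rate x 20 bits/pixel
# (10-bit 4:2:2 = two 10-bit words per pixel clock).
formats = {
    # name: (total_width, total_height, frames_per_second)
    "HD-SDI 720p60":  (1650, 750, 60),   # SMPTE 296M raster
    "3G-SDI 1080p60": (2200, 1125, 60),  # SMPTE 274M raster
}

for name, (w, h, fps) in formats.items():
    gbps = w * h * fps * 20 / 1e9
    print(f"{name}: {gbps:.3f} Gbps")
# -> HD-SDI 720p60: 1.485 Gbps
# -> 3G-SDI 1080p60: 2.970 Gbps
```

The 6G and 12G rates are simply 2x and 4x the 3G rate (5.94 and 11.88 Gbps), which is what buys the step up to 4Kp30 and 4Kp60.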

The Smart Embedded Vision initiative also includes a network of partners: Kaya Instruments, which provides PolarFire FPGA IP cores for CoaXPress v2.0 and 10 GigE Vision; Alma Technology; Bitec; and ASIC Design Services, whose Core Deep Learning (CDL) framework enables a power-efficient Convolutional Neural Network (CNN)-based imaging and video platform for embedded and edge computing applications.

With Xilinx and Intel – the two big players in FPGA – focusing more of their energy on data center and 5G rollouts, Microchip could capture a very large niche in the rapidly growing embedded vision space. Their devices have some compelling advantages, particularly in the area of power consumption, and their collection of IP and tools should make designing-in not exactly a snap, but far easier than bringing up video and AI/CNN applications from bare metal using FPGA design tools.
