
Driving Down Intelligent Vision

Microchip Solution Lowers the Bar

For FPGAs, machine vision represents perhaps the ultimate killer app. First, the video landscape is varied enough that there is no “one size fits all” way to manage the data coming into and out of the compute engine. With so many resolutions, frame rates, encoding standards, and so forth, the permutations are dizzying. So dizzying, in fact, that FPGAs have become a go-to solution for design teams wanting to make any kind of video solution robust and agnostic when it comes to video in and out.

So, since you’ve already got an FPGA or two in there anyway…

With the advent of AI-based vision algorithms, we need very high-performance, low-power inferencing of convolutional neural network (CNN) models. In vision, we can often quantize models down to very narrow bit-widths and still retain adequate accuracy. This is the domain where FPGA fabric shines. Quantized, optimized models can be realized in FPGAs at remarkable performance levels, using only a tiny fraction of the power that would be required for software implementations on conventional processors. The vast arrays of multiplier/DSP resources on many FPGAs bring incredible performance to the table – both throughput and latency – with a very small power tab.
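To make the quantization idea concrete, here is a minimal, hypothetical sketch in plain Python/NumPy (not Microchip's or any partner's tool flow) of symmetric linear quantization of a convolution kernel to a few candidate bit-widths, along with the reconstruction error you might inspect before committing to a width:

```python
import numpy as np

def quantize_symmetric(weights: np.ndarray, bits: int):
    """Symmetric linear quantization of a weight tensor to `bits` bits.

    Returns integer codes plus the scale needed to reconstruct
    approximate real values as codes * scale.
    """
    qmax = 2 ** (bits - 1) - 1              # 127 for 8-bit, 7 for 4-bit
    scale = np.abs(weights).max() / qmax    # one scale per tensor (per-channel is also common)
    codes = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int32)
    return codes, scale

# Toy example: a random stand-in for a bank of 3x3 convolution kernels
rng = np.random.default_rng(0)
kernel = rng.normal(scale=0.1, size=(64, 3, 3, 3)).astype(np.float32)

for bits in (8, 6, 4):
    codes, scale = quantize_symmetric(kernel, bits)
    err = np.abs(kernel - codes * scale).mean()
    print(f"{bits}-bit: mean abs reconstruction error = {err:.5f}")
```

In a real flow you would validate accuracy on labeled data rather than raw weight error, but the principle is the same: narrower codes mean fewer bits per multiply, which is exactly what FPGA DSP blocks and fabric logic exploit.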

The flexibility of FPGA devices means that a virtually unlimited variety of video formats, video filtering and processing algorithms, and CNN models can be implemented on the same device – a near slam-dunk if you are creating edge-based systems that need to know what they’re looking at. This combination of requirements for edge embedded vision plays to the sweet spot of FPGAs in just about every possible way.

But, of course, there are a couple of catches.

First, the problem has traditionally required a pretty high-end FPGA. That means power and cost are not in line with everyone’s BOM budgets. As you get closer to the bleeding edge of FPGA technology, the vendor’s margins get healthier and the price per part becomes prohibitive for many higher-volume applications. For something like a security camera destined for broad distribution, a high-end FPGA is simply too expensive.

Second, FPGAs are just plain hard to use. Despite the best efforts of FPGA companies over the past decades, doing a design from scratch for an FPGA requires a competent team of FPGA experts and a few months of design time. Many companies moving into the embedded vision space simply do not have easy access to that kind of engineering expertise, and that makes designing-in an FPGA a risky and costly prospect for them.

Now, however, the bar is being lowered. Microchip this week announced their “Smart Embedded Vision Initiative,” which aims to bring robust, low-cost, low-power, high-performance vision solutions to the table that don’t require a team of FPGA experts to design in. Microchip says the initiative is designed to bring “IP, hardware and tools for low-power, small form factor machine vision designs across the industrial, medical, broadcast, automotive, aerospace and defense markets.”

That’s a lot of markets.

For those of you who haven’t been following every zig and zag in the FPGA world, Microchip acquired Microsemi back in May 2018. Back in 2010, Microsemi acquired Actel, so today’s Microchip FPGAs have direct lineage back to Actel’s technology. Actel specialized in low-power, high-security, high-reliability FPGAs that did not use the conventional SRAM-like architecture. Instead, Actel used antifuse and flash in their FPGAs, making them non-volatile to boot. 

This bit of history is important in the current context because the FPGAs at the center of this new initiative are Microchip’s PolarFire FPGAs, which bring a bevy of nice benefits to the embedded vision market. Microchip claims that PolarFire has 30-50% lower power consumption than competitive mid-range FPGAs and “five to 10 times lower static power” (which, despite some grammar and semantic issues, we can buy into). If you’re trying to work within a tight power budget, those numbers could make a huge difference.

The “Smart Embedded Vision Initiative” includes a useful array of features and functions including:

  • Serial Digital Interface (SDI) IP to transport uncompressed video data streams over coaxial cabling at multiple speeds: HD-SDI (1.485 Gbps, 720p/1080i), 3G-SDI (2.970 Gbps, 1080p60), 6G-SDI (5.94 Gbps, 4Kp30) and 12G-SDI (11.88 Gbps, 4Kp60).
  • 1.5 Gbps per lane MIPI-CSI-2 IP, which is typically used in industrial cameras. MIPI-CSI-2 is a sensor interface that links image sensors to FPGAs. The PolarFire family supports receive speeds up to 1.5 Gbps per lane and transmit speeds up to 1 Gbps per lane. (A rough lane-count sizing sketch, shown after this list, translates these rates into sensor formats.)
  • 2.3 Gbps per lane SLVS-EC Rx, which is an image sensor interface IP supporting high-resolution cameras. Customers can implement a two-lane or eight-lane SLVS-EC Rx FPGA core.
  • Multi-rate Gigabit MAC supporting 1, 2.5, 5 and 10 Gbps speeds over an Ethernet PHY, enabling a Universal Serial 10G Media Independent Interface (USXGMII) MAC IP with auto-negotiation.
  • 6.25 Gbps CoaXPress v1.1 Host and Device IP, which is a standard used in high-performance machine vision, medical, and industrial inspection. Microchip also plans to support CoaXPress v2.0, which doubles the bandwidth to 12.5 Gbps.
  • HDMI 2.0b – The HDMI IP core today supports resolutions up to 4K at 60 fps transmit and 1080p at 60 fps receive.
  • Imaging IP bundle, which features the MIPI-CSI-2 interface and includes image processing IPs for edge detection, alpha blending, and image enhancement for color, brightness, and contrast adjustments. (A minimal software sketch of the blending and enhancement math follows the list.)
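As promised above, here is a back-of-the-envelope sketch of how the 1.5 Gbps-per-lane CSI-2 receive figure translates into lane counts. Only the per-lane rate comes from the announcement; the sensor formats and the 20% framing allowance below are illustrative assumptions:

```python
import math

LANE_GBPS = 1.5  # PolarFire MIPI-CSI-2 Rx speed per lane (from the announcement)

def lanes_needed(width, height, fps, bits_per_pixel, overhead=1.2):
    """Estimate raw throughput and CSI-2 lanes for an uncompressed sensor stream.

    `overhead` is a loose allowance for blanking and protocol framing.
    """
    gbps = width * height * fps * bits_per_pixel * overhead / 1e9
    return gbps, math.ceil(gbps / LANE_GBPS)

examples = {
    "1080p60, RAW10": (1920, 1080, 60, 10),
    "4Kp30, RAW12":   (3840, 2160, 30, 12),
}
for name, fmt in examples.items():
    gbps, lanes = lanes_needed(*fmt)
    print(f"{name}: ~{gbps:.2f} Gbps -> {lanes} lane(s) at {LANE_GBPS} Gbps/lane")
```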
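And for the imaging IP bundle, here is a tiny software reference (hypothetical NumPy, not the FPGA IP itself) for what the alpha blending and brightness/contrast enhancement stages compute per pixel; hardware IP of this kind would typically do the same arithmetic in fixed point on streaming pixels:

```python
import numpy as np

def alpha_blend(fg, bg, alpha):
    """Blend foreground over background: out = alpha*fg + (1 - alpha)*bg."""
    return alpha * fg.astype(np.float32) + (1.0 - alpha) * bg.astype(np.float32)

def brightness_contrast(img, gain=1.0, offset=0.0):
    """Simple enhancement: out = clip(gain*img + offset, 0, 255)."""
    return np.clip(gain * img.astype(np.float32) + offset, 0, 255).astype(np.uint8)

# Toy 8-bit frames standing in for a camera buffer and an overlay
frame   = np.full((4, 4), 100, dtype=np.uint8)
overlay = np.full((4, 4), 200, dtype=np.uint8)

blended  = alpha_blend(frame, overlay, alpha=0.75)          # 0.75*100 + 0.25*200 = 125
enhanced = brightness_contrast(frame, gain=1.2, offset=10)  # 1.2*100 + 10 = 130
print(blended[0, 0], enhanced[0, 0])
```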

The Smart Embedded Vision initiative also includes a network of partners: Kaya Instruments, which provides PolarFire FPGA IP cores for CoaXPress v2.0 and 10 GigE Vision; Alma Technologies; Bitec; and ASIC Design Services, which provides a Core Deep Learning (CDL) framework that enables a power-efficient Convolutional Neural Network (CNN)-based imaging and video platform for embedded and edge computing applications.

With Xilinx and Intel – the two big players in FPGA – focusing more of their energy on data centers and 5G rollouts, Microchip could capture a very large niche in the rapidly growing embedded vision space. Their devices have some compelling advantages, particularly in the area of power consumption, and their collection of IP and tools should make designing-in not exactly a snap, but far easier than bringing up video and AI/CNN applications from bare metal using FPGA design tools.
