April 10, 2012

Envisioning the Future

Embedded Vision Lunges Forward

by Kevin Morris

Last summer, we published an article welcoming the Embedded Vision Alliance to the world. With the incredible processing power available today - particularly when you consider the massive acceleration possibilities with devices like FPGAs and GPUs - real embedded vision becomes a realistic possibility. Turning that possibility into reality, however, is an enormous task, requiring collaboration from dozens of companies and academic institutions.

When we talk about embedded vision, we're not talking about just bolting a camera onto your embedded system. Devices that simply capture, reformat, record, or transmit video don't count. In order to count for our purposes here, your system needs to actually understand what it's seeing and do something useful with that information. When it comes to problem complexity, that's a whole 'nother kettle of fish.
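The distinction can be illustrated with a toy sketch (pure Python; the frames, grids, and thresholds are invented for illustration, not from any real system): merely storing frames is capture, while comparing successive frames and deciding that something happened is a first, primitive step toward "understanding."

```python
# Toy illustration of capture vs. understanding. Frames are 2-D grids
# of brightness values (0-255); all numbers here are invented.

def capture(frame, buffer):
    """Plain capture: store the frame. No understanding involved."""
    buffer.append(frame)

def detect_motion(prev, curr, threshold=30, min_pixels=2):
    """A first step toward vision: compare two frames and decide
    whether something moved, rather than just recording pixels."""
    changed = sum(
        1
        for row_p, row_c in zip(prev, curr)
        for p, c in zip(row_p, row_c)
        if abs(p - c) > threshold
    )
    return changed >= min_pixels

# A static scene, then the same scene with a bright object in one corner.
frame_a = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
frame_b = [[10, 10, 10], [10, 200, 200], [10, 200, 200]]

print(detect_motion(frame_a, frame_b))  # True: the system "noticed" a change
print(detect_motion(frame_a, frame_a))  # False: nothing happened
```

Real embedded vision goes far beyond frame differencing, of course - the point is only that the system draws a conclusion from the pixels instead of passing them along.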

Machine vision has been a research topic for decades. Hundreds of academic and research papers and projects have led to an enormous body of published work. Researchers have tackled everything from ambitious broad-based vision algorithms to focused topics like facial recognition, gesture recognition, and object identification. The fundamental software algorithms involved in machine vision are highly complex and in a state of constant flux and improvement. The catalog of software doing machine-vision-related tasks is enormous.

As much work as there has been in machine vision, however, practical application of the technology is in its infancy. There have been several obstacles to commercialization of embedded vision technology on a large scale. First, even though there is a huge amount of research and software out there in the world, there is no standard approach that works for the general case. The choice of algorithm depends heavily on the specific problem being solved, and the quality of results required for most real-world commercial applications can be achieved only through exhaustive specialized tuning. You can't just drop a piece of embedded vision software into your system and have it immediately start recognizing that "the man is giving something to the woman."

The second major obstacle has been compute power. Most of the research projects so far have relied on post-processing of captured video at what amounts to a terrible frame rate - sometimes far below one frame per second. In order to do real-time vision, academic algorithms would need at least two to three orders of magnitude more compute throughput. If you want that kind of horsepower in an embedded system, your options dwindle considerably.
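That gap is easy to quantify with back-of-the-envelope arithmetic (the frame rates below are illustrative, not measurements from any particular project): going from a pipeline that post-processes at a tenth of a frame per second to a real-time 30 frames per second is a 300x speedup, squarely in the two-to-three-orders-of-magnitude range.

```python
import math

# Illustrative numbers: a research pipeline post-processing at 0.1 fps
# versus a common real-time video target of 30 fps.
research_fps = 0.1
realtime_fps = 30.0

speedup = realtime_fps / research_fps  # 300x faster
orders = math.log10(speedup)           # ~2.5 orders of magnitude

print(f"required speedup: {speedup:.0f}x ({orders:.2f} orders of magnitude)")
```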

This is where technologies like FPGAs and GPUs come to the forefront. Vision algorithms (by their very nature) are the kind of compute problems that are highly parallelizable. Since the algorithms are currently in an extreme state of flux, and since there are dramatic differences in the approach one would take for different sub-areas of embedded vision, it is unlikely that we’ll see application-specific standard parts (ASSPs) addressing embedded vision in an off-the-shelf way any time soon.
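A quick sketch of why vision workloads parallelize so well: in per-pixel operations like thresholding, or in each tap of a filter, every output value depends only on a small, fixed neighborhood of inputs - so all outputs could, in principle, be computed simultaneously. That independence is exactly what GPU cores and FPGA fabric exploit. The toy code below (pure Python, running sequentially; the images are invented) shows the structure, not the speed:

```python
def threshold(image, level):
    """Per-pixel threshold: each output pixel depends only on the
    corresponding input pixel, so every pixel could be handled by a
    separate GPU thread or a dedicated slice of FPGA fabric."""
    return [[255 if p > level else 0 for p in row] for row in image]

def box_blur_3tap(row):
    """A 1-D 3-tap blur: each output depends on a fixed 3-pixel window,
    with no dependence between outputs -- a natural fit for a hardware
    pipeline producing one result per clock cycle."""
    return [
        (row[max(i - 1, 0)] + row[i] + row[min(i + 1, len(row) - 1)]) // 3
        for i in range(len(row))
    ]

img = [[0, 100, 200], [50, 150, 250]]
print(threshold(img, 120))         # [[0, 0, 255], [0, 255, 255]]
print(box_blur_3tap([0, 90, 180])) # [30, 90, 150]
```

Because no output ever waits on another output, the same computation maps onto hundreds of GPU threads or a deep FPGA pipeline with essentially no restructuring.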

In order to do a commercial application that includes embedded vision today, you’re likely to need an accelerator like an FPGA or GPU that would allow you to accelerate the portions of your chosen algorithm that are performance-intensive. In general, FPGAs will be able to achieve better performance and throughput at lower power consumption than GPUs, but GPUs will generally be easier to program. The FPGA industry has been working hard to close that programming gap, however, so FPGAs may well become the go-to technology in vision applications. The advent of new hybrid processor/FPGA devices like Xilinx’s Zynq or Altera’s upcoming SoC FPGAs promises even better things for vision applications: the programmable hardware fabric can be used to accelerate critical algorithms and get data onto and off of the chip efficiently, while the embedded processing system does the less performance-critical housekeeping and control tasks.
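The partitioning idea behind such hybrid devices can be sketched in software terms. In the toy model below, every stage name and per-frame cycle count is invented for illustration: profile the pipeline, push the few compute-heavy stages into the fabric, and leave control and housekeeping on the processor.

```python
# Toy model of hardware/software partitioning on a hybrid
# processor + FPGA device. Stage names and per-frame cycle
# counts are invented for illustration only.
pipeline = {
    "capture_dma":    {"cycles": 1_000},
    "convolution":    {"cycles": 900_000},  # compute-heavy
    "feature_match":  {"cycles": 400_000},  # compute-heavy
    "tracking_logic": {"cycles": 20_000},
    "ui_and_logging": {"cycles": 5_000},
}

def partition(stages, offload_fraction=0.25):
    """Offload any stage whose cost exceeds a fraction of the total:
    those go to the FPGA fabric; the rest stay on the CPU."""
    total = sum(s["cycles"] for s in stages.values())
    fabric = [n for n, s in stages.items()
              if s["cycles"] > offload_fraction * total]
    cpu = [n for n in stages if n not in fabric]
    return fabric, cpu

fabric, cpu = partition(pipeline)
print("fabric:", fabric)  # the performance-critical stages
print("cpu:   ", cpu)     # housekeeping and control
```

Real partitioning decisions also weigh data movement, latency, and fabric capacity, but the shape of the decision - a few hot kernels in hardware, everything else in software - is the one the article describes.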

The Embedded Vision Alliance held an editor and analyst event in parallel with the recent Design West conference in San Jose. As we described previously, the mission of the Embedded Vision Alliance is to bring together companies and organizations with an interest in the development and commercialization of vision technologies. At the moment, the list of member companies is dominated by semiconductor and other hardware companies whose products could be used in an embedded vision system - primarily to help solve the compute acceleration problem. However, a growing number of participants come from the software side as well.

Commercializing embedded vision will take a concerted, collaborative effort from all of these players. The problem being solved is possibly one of the most complex ever tackled in an embedded computing space. Sorting and sifting through the massive amounts of research and finding the relevant, valuable technologies to push forward is a monumental task in itself. Then, turning academic ideas that work well in the lab into things that are manufacturable and perform well enough for commercial applications is an additional challenge.

The launch of Microsoft’s Kinect caused an explosion in awareness and experimentation in embedded vision technology. Kinect demonstrated some impressive capabilities in a mass-produced, low-cost system that set new standards for the industry. Following on the heels of that, we are on the verge of a revolution in the automotive industry centered around embedded vision technology. Within a few years, vision-based (or partially vision-based) systems like lane-departure warning, collision warning and avoidance, adaptive cruise control, road-sign recognition, automatic parking, and driver-alertness monitoring will find their way into mass-production automobiles.

In other areas, industrial automation has already widely adopted machine vision for localized tasks, but broader application of the technology promises to far exceed the capabilities of today’s limited applications. Quality inspection, robotic guidance, and factory automation are obvious applications. 

The security and defense industries are looking to implement some of the most ambitious (and creepy) applications of embedded vision technology. With the proliferation of high-capability, low-cost cameras for observation, the bottleneck in the surveillance and security areas quickly becomes human observers to watch all those screens. Intelligent systems that understand the activities they are monitoring can have a dramatic impact on the capabilities of surveillance systems. Computers are never looking at the wrong monitor, falling asleep, or munching on potato chips when the relevant action starts on one camera.

Despite the huge advances in machine vision technology, we’re just at the beginning. Over the next few years, look for an explosion of commercial applications and technology opportunities in the vision space. We expect devices like FPGAs and GPUs to be at the center - enabling much of this new technology. For those of us in the design community, the opportunities to participate and compete with innovative ideas and products should be rich.

