There’s a reason the bad guys are called “shadowy figures.” They like to hide in the near-darkness – to avoid detection. In the course of conducting their criminal acts, they stealthily slink along in the shadows, confident that the auto-exposure on the security cameras will render the devices oblivious to their presence.
Wrong-o, fictitious evildoers.
FPGA-powered high-dynamic-range (HDR) cameras are on duty, and they can see right through your little shade charade.
We’ve all experienced the problem with our vacation photos. If the beautiful sunset is properly exposed, our posing partner is nothing but a black silhouette. If we properly expose the person, the sky is a wash of white. What we want is to extend the dynamic range of the camera so that both the light and dark areas are properly exposed. Image sensors just don’t have the dynamic range of our combined eyes and brain. If we’re lucky enough to have a camera with HDR capability, however, it can get pretty close. HDR cameras take two or more images at different exposure settings – one bright and one dark, for example – then merge them into a single image, keeping the properly exposed areas from each.
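The merging step above can be sketched in a few lines. This is a deliberately simplified illustration (not the algorithm any particular camera uses): each pixel from each exposure gets a “well-exposedness” weight that peaks at mid-gray, so crushed shadows from the short exposure and blown highlights from the long exposure contribute almost nothing to the result. Real pipelines also align the frames and tone-map the output.

```python
from math import exp

def merge_exposures(dark, bright):
    """Merge two exposures of the same scene (pixel values in [0, 1]).

    `dark` is the short exposure (highlights preserved, shadows crushed);
    `bright` is the long exposure (shadows preserved, highlights blown).
    Illustrative weighted average only -- not a production HDR pipeline.
    """
    def weight(p):
        # Gaussian-style weight peaking at mid-gray (0.5): pixels near
        # clipping (0.0 or 1.0) contribute very little to the merge.
        return exp(-((p - 0.5) ** 2) / (2 * 0.2 ** 2)) + 1e-6

    return [(weight(d) * d + weight(b) * b) / (weight(d) + weight(b))
            for d, b in zip(dark, bright)]

# Two-pixel "scene": one deep shadow, one bright highlight.
dark   = [0.02, 0.50]   # short exposure: shadow crushed, highlight fine
bright = [0.50, 0.98]   # long exposure: shadow fine, highlight blown
merged = merge_exposures(dark, bright)
print(merged)           # both pixels land near mid-tone
```

Note how the merge keeps whichever version of each pixel carries usable detail, which is exactly why the combined frame shows both the sunset and the person.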
If you have an iPhone, you may have played with one of the many apps that let you manually create HDR photos. One thing you’ll notice right away is that there is a lot of processing required for the alignment, merging, and rendering of the finished HDR image. It takes several seconds at least.
Now, let’s say you want to try that same trick with high-definition video – like 1080p60. Your little camera will be grabbing, for purposes of our example, 120 or 180 frames of HD video per second (depending on whether you’re using two or three images to create each HDR frame). Each set of exposures will span a range from dark to bright. Your HDR processing engine has about 1/60 of a second to finish its work before the next set of images comes along. If that sounds like too much work for even a high-end DSP processor, you’re right. It’s a great job for an FPGA, though.
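The back-of-the-envelope arithmetic behind that claim is worth making explicit. The sketch below assumes full 1920×1080 frames with no blanking or readout overhead (real sensors add some), just to show the raw pixel throughput the merge engine has to sustain:

```python
# Rough throughput budget for 1080p60 HDR video.
# Assumes full active frames; ignores sensor blanking overhead.
WIDTH, HEIGHT, FPS = 1920, 1080, 60

output_pixels = WIDTH * HEIGHT * FPS      # merged HDR output stream
frame_budget_ms = 1000 / FPS              # time per merged frame

for exposures in (2, 3):
    sensor_frames = FPS * exposures       # raw frames captured per second
    input_pixels = output_pixels * exposures
    print(f"{exposures} exposures: {sensor_frames} sensor frames/s, "
          f"{input_pixels / 1e6:.0f} Mpixel/s in, "
          f"{frame_budget_ms:.1f} ms per merged frame")
```

Roughly 250–375 megapixels per second of input, with a hard 16.7 ms deadline per output frame: a natural fit for a pipelined FPGA datapath that touches every pixel exactly once per clock, and a tall order for a processor running the merge in software.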
Unfortunately, many of the engineers who work in the area of HDR security cameras are experts on stuff like… HDR video, instead of stuff like… how to write complex video signal-processing datapaths in HDL and synthesize, place, and route them in an FPGA. In fact, in 2009, Lattice Semiconductor did a little poll on FPGA use among attendees at ISC West (International Security Conference and Exposition), and over 75% of manufacturers responded with, “What’s an FPGA?”
Fast forward one year, and over 50% of attendees at ISC West 2010 said, “We use FPGA / We are considering FPGA use.” Wow. That’s a lot of people needing to learn a lot about FPGAs in a short time. Getting all those people from zero to “proficient enough to use an FPGA in a high-performance camera design” in the space of a year or so is a daunting task. (By “daunting” we mean, of course, “impossible.”)
Security cameras are just one of the areas where large numbers of engineers need to use FPGAs for the first time. That’s why we’ve seen such a big trend from the FPGA companies toward ready-to-run development kits for specific applications that work pretty much out of the box: in this case, the Lattice HDR-60 Video Camera Development Kit – produced by Lattice in conjunction with Helion (a company that specializes in image-processing IP) and Aptina (a maker of high-performance image sensors). Each company represented by a leg of this triangle has a lot of experience in its part. Lattice has sold over a billion programmable logic devices; Aptina has sold over a billion sensors; and Helion has more than 15 years in HDR/WDR image processing and offers more than 90 image-processing IP cores.
This trinity has come together to produce a high-performance FPGA-based HDR camera kit that comes right out of the box doing HD HDR video. The camera is full-HD 1080p60 with HDR and features like advanced auto-exposure and auto-white-balance. If you’re doing security or surveillance applications and you’ve got $399, you’re almost ready to go — and you haven’t written a single line of HDL. The dev kit has a video camera main board with a Lattice ECP3-70 FPGA, a Broadcom BroadR-Reach PHY, USB, BNC, and HDMI connectors – and the goodies that support them. Plugged into the end is a NanoVista head board with an Aptina sensor and a Sunex lens. The board even has connectors and support for twin sensors for more advanced video-stitching applications. Plug it in, power it up, and you’ll be looking at a demo application showing the “normal” and “HDR” views of whatever the camera sees on your HDMI/DVI monitor.
If you want to, you can even use the development board for full production. You just need to license the IP you plan to ship with your product, add whatever magic will make your HDR camera superior to all those other posers on the market, and you’re ready to start shipping product. Well, mostly ready, anyway.
As far as the FPGA goes, the IP for the HDR and other processing still leaves plenty of fabric for you to add other magic of your own. The LatticeMico32 soft-core processor manages the control functions, and the IP peripherals are connected to it via a Wishbone-compatible bus interconnect.
The result is a highly capable, almost-ready-to-ship camera with all the flexibility you need in a development board to customize it for whatever application you have in mind – from the obvious security and surveillance to related applications like traffic monitoring, automotive apps, and video conferencing. Even if you’re just starting to figure out FPGAs, you can take advantage of the work of experienced FPGA designers via the available IP blocks, and you’ll come off looking like a genius to your project team. “Wow, last year you didn’t know what an FPGA was, and here you’ve given us a whole HDR camera design based on one. You’re amazing.”
Your secret’s safe with us.