
Stable Video, Easier Development

There are times when a shaky video can be just the thing. Imagine: where would the Blair Witch Project have been without it? What would an entire generation of hipsters do without the ability to shake (ironically) their video? How are happening new producers supposed to attract new audiences without being able to make their video look shoddy and unprofessional?

But, aside from those times, shaky video is not so good. In fact, for a lot of us, even these examples aren’t great (some of them representing videos that only their producers could love). We all have a hard enough time holding our cameras still without having to add shaking as part of some obscure production value.

We already take care of this for still pictures using optical image stabilization (OIS); why not simply apply that technology to videos? Well, for the same reason you don’t use a still photo camera to shoot video. Oops! Wait… I guess these days you can. OK, then simply because video isn’t the same as a still picture, so what works for one doesn’t necessarily work for the other.

First of all, still photos are just that: still. Any movement is wrong (except for things like sports, I suppose). So, in theory, you can just neutralize any motion, period. But, as you can imagine, that’s not going to work with video – unless you want to turn your video into a still shot. No, video means motion by definition. So when stabilizing a video image, you have to figure out what motion is intended and what motion is due to unwanted shaking.
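To make that distinction concrete, here's a minimal sketch in plain C of one common way to split the two (a generic illustration, not CEVA's algorithm): accumulate the per-frame motion estimates into a camera path, low-pass filter that path to guess the intended motion, and treat whatever is left over as the jitter to remove. The frame count, smoothing factor, and motion values below are all made up for illustration.

```c
/* Generic sketch: separate intended camera motion from shake by
 * low-pass filtering the accumulated camera path. Not CEVA's algorithm. */
#include <stdio.h>

#define N_FRAMES 8
#define ALPHA    0.85f   /* smoothing factor: higher = smoother "intended" path */

int main(void)
{
    /* Hypothetical per-frame horizontal motion estimates (pixels),
     * e.g. from block matching or feature tracking. */
    float dx[N_FRAMES] = { 2.0f, -3.0f, 12.0f, 11.0f, -4.0f, 13.0f, 2.0f, -1.0f };

    float raw_path    = 0.0f;   /* where the camera actually pointed   */
    float smooth_path = 0.0f;   /* where we believe it meant to point  */

    for (int f = 0; f < N_FRAMES; f++) {
        raw_path += dx[f];                                              /* integrate motion */
        smooth_path = ALPHA * smooth_path + (1.0f - ALPHA) * raw_path;  /* IIR low-pass     */

        float correction = smooth_path - raw_path;   /* shift needed to cancel the jitter */
        printf("frame %d: camera at %6.1f, intended %6.1f, correct by %6.1f px\n",
               f, raw_path, smooth_path, correction);
    }
    return 0;
}
```

The large, consistent moves survive the filter as "intended" motion; the back-and-forth wiggles get cancelled. Real stabilizers do this in two dimensions plus rotation, but the principle is the same.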

OIS is typically implemented as hardware embedded in the camera module; CEVA, by contrast, proposes handling digital video stabilization (DVS) in software for things like smartphones, wearable electronics, and cameras mounted inside moving vehicles, all of which involve inherent shaking. And all of which benefit from low power – wearable cameras in particular.

So CEVA has put together a set of DVS functions optimized to work on the CEVA-MM3101, their imaging-oriented DSP platform. These functions come with a number of options and parameters, since one solution doesn’t necessarily solve all the problems optimally. And with no standards out there, they see this as an opportunity for their customers to differentiate their camera solutions.

Another reason for using a programmable solution instead of a hardware version is that the DSP hardware can be reused for other functions – or can combine DVS with other functions like Super-Resolution.

Their solution provides correction along four axes: the x, y, and z directions plus one angular direction, roll (which they call rotation about the Z axis, so I guess the convention is that forward is along the Z axis). They also provide correction for the “Jello effect”: distortion that occurs when a rolling shutter is still reading out the frame while the scene or camera is moving, causing something of a relativistic leaning effect. And they can scale to handle 4K Ultra HD on a single core and adapt to various lighting conditions.
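For the curious, here's roughly how a Jello fix works in principle, sketched in C as a generic illustration rather than CEVA's implementation: because each row is exposed at a slightly different time, each row gets its own shift, interpolated from the motion measured over the frame's readout. The function and parameter names here are hypothetical.

```c
/* Generic illustration of rolling-shutter ("Jello") correction:
 * rows are read out at different times, so each row is shifted back
 * by the motion that occurred up to its readout time. */
#include <math.h>

#define WIDTH  1920
#define HEIGHT 1080

/* Shift one row of 8-bit luma pixels horizontally, clamping at the edges. */
static void shift_row(const unsigned char *src, unsigned char *dst, int shift)
{
    for (int x = 0; x < WIDTH; x++) {
        int sx = x + shift;
        if (sx < 0)       sx = 0;
        if (sx >= WIDTH)  sx = WIDTH - 1;
        dst[x] = src[sx];
    }
}

/* dx_total: horizontal camera motion (pixels) accumulated over the frame's
 * readout time, e.g. from a gyro or from comparing motion estimates at the
 * top and bottom of the frame. */
void correct_jello(const unsigned char *in, unsigned char *out, float dx_total)
{
    for (int y = 0; y < HEIGHT; y++) {
        float t = (float)y / (float)(HEIGHT - 1);   /* 0 = first row read, 1 = last */
        int shift = (int)lroundf(dx_total * t);     /* motion at this row's readout time */
        shift_row(in + y * WIDTH, out + y * WIDTH, shift);
    }
}
```

A production version would also interpolate vertically and handle rotation, but the row-by-row re-timing is the essence of un-leaning those leaning buildings.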

But power is also critical: they say that existing solutions use around 1 W of power; they’re touting less than 35 mW for 1080p30 video when implemented on a 28-nm process.

This new set of libraries could be integrated into an application using their new Application Development Kit (ADK), which they announced at the same time. The ADK is a framework for easing application development and optimization.

One noted feature is called SmartFrame. This allows a developer to operate on an entire video frame while the underlying framework takes care of logistical details. In particular, it can tile up the frame and apply “tunneling” to multiple algorithms, which they refer to as “kernels.”

This tunneling combines a pipeline architecture with the ability to chain multiple kernels together for back-to-back execution. Without tunneling, each kernel would be called by a program, and execution would return to the calling function after the kernel completed so that the program could then call the next kernel.

Instead, the framework allows the first kernel to work on one tile and then hand that tile directly off to the next kernel in the chain while the first kernel starts work on the second tile. And so forth for additional tiles and kernels. This minimizes the amount of data copying needed, and control doesn’t return to the calling program until the entire frame has been processed by all of the kernels.
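To picture the control flow, here's a tiny C sketch of the tunneled case; the types and names are hypothetical, not the ADK's actual API, and on the real DSP the kernels overlap in a pipeline rather than running strictly in sequence as this single-threaded loop does.

```c
/* Conceptual sketch of tile-based "tunneling" with made-up names,
 * not the ADK's API. */
#include <stddef.h>

/* One tile of the frame; in practice this would live in the DSP's local memory. */
typedef struct { unsigned char *pixels; int width, height; } tile_t;

/* A kernel processes one tile in place. */
typedef void (*kernel_fn)(tile_t *tile);

/* Without tunneling, the caller would run kernel 0 over every tile, get control
 * back, copy intermediate results, then call kernel 1, and so on. With
 * tunneling, each tile flows through the whole chain before control returns,
 * so intermediate data never has to leave local memory. */
void process_frame_tunneled(tile_t *tiles, size_t n_tiles,
                            kernel_fn *chain, size_t n_kernels)
{
    for (size_t t = 0; t < n_tiles; t++) {
        for (size_t k = 0; k < n_kernels; k++) {
            chain[k](&tiles[t]);   /* hand this tile straight to the next kernel */
        }
    }
}
```

The payoff is less data movement: intermediate tiles stay put between kernels instead of being written out and read back for each pass over the frame.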

The ADK also makes it possible to call DSP offloads from CPU programs, something we saw with CEVA’s AMF announcement.

You can find out more about CEVA’s DVS capabilities in their DVS announcement, and there’s more about their ADK in their ADK announcement.
