Stable Video, Easier Development

There are times when a shaky video can be just the thing. Imagine: where would The Blair Witch Project have been without it? What would an entire generation of hipsters do without the ability to shake (ironically) their video? How are up-and-coming producers supposed to attract new audiences without being able to make their videos look shoddy and unprofessional?

But, aside from those times, shaky video is not so good. In fact, for a lot of us, even these examples aren’t great (some of them representing videos that only their producers could love). We all have a hard enough time holding our cameras still without having to add shaking as part of some obscure production value.

We already take care of this for still pictures using optical image stabilization (OIS); why not simply apply that technology to videos? Well, for the same reason you don’t use a still photo camera to shoot video. Oops! Wait… I guess these days you can. OK, then simply because video isn’t the same as a still picture, so what works for one doesn’t necessarily work for the other.

First of all, still photos are just that: still. Any movement is wrong (except for things like sports, I suppose). So, in theory, you can just neutralize any motion, period. But, as you can imagine, that’s not going to work with video – unless you want to turn your video into a still shot. No, video means motion by definition. So when stabilizing a video image, you have to figure out what motion is intended and what motion is due to unwanted shaking.
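To make that concrete, here's a minimal sketch (plain C, not CEVA's code) of the usual approach: estimate the camera's global motion from frame to frame, smooth the accumulated path to recover the motion the user presumably intended, and treat the difference as shake to be compensated. The motion values and the moving-average filter here are purely illustrative assumptions.

```c
/* Minimal sketch of the core idea behind digital video stabilization:
 * separate intended camera motion from unwanted jitter by smoothing the
 * estimated camera path and compensating only for the difference.
 * Simplified illustration, not CEVA's actual algorithm. */
#include <stdio.h>

#define NUM_FRAMES    10
#define SMOOTH_RADIUS  2   /* frames on each side used for the moving average */

int main(void)
{
    /* Hypothetical per-frame global x-translations (pixels), e.g. from
     * block matching or feature tracking between consecutive frames. */
    float dx[NUM_FRAMES] = { 2.0f, -3.0f, 2.5f, -2.0f, 3.0f,
                             1.0f,  4.0f, 0.5f,  3.5f, 1.0f };

    float path[NUM_FRAMES];      /* accumulated (raw) camera path      */
    float smoothed[NUM_FRAMES];  /* low-pass filtered (intended) path  */

    /* Integrate frame-to-frame motion into an absolute camera path. */
    float pos = 0.0f;
    for (int i = 0; i < NUM_FRAMES; i++) {
        pos += dx[i];
        path[i] = pos;
    }

    /* Smooth the path with a moving average; the smooth path is treated
     * as the motion the user intended (a pan), the rest as shake. */
    for (int i = 0; i < NUM_FRAMES; i++) {
        float sum = 0.0f;
        int count = 0;
        for (int j = i - SMOOTH_RADIUS; j <= i + SMOOTH_RADIUS; j++) {
            if (j >= 0 && j < NUM_FRAMES) {
                sum += path[j];
                count++;
            }
        }
        smoothed[i] = sum / count;
    }

    /* The per-frame correction is the jitter: smooth path minus raw path.
     * A real stabilizer would shift/warp each frame by this amount. */
    for (int i = 0; i < NUM_FRAMES; i++)
        printf("frame %d: correction = %+.2f px\n", i, smoothed[i] - path[i]);

    return 0;
}
```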

Unlike OIS, which is typically implemented as hardware embedded into the camera module, CEVA is proposing handling digital video stabilization (DVS) in software for things like smartphones, wearable electronics, and cameras mounted inside moving vehicles, all of which involve inherent shaking. And all of which benefit from low power – wearable cameras in particular.

So CEVA has put together a set of DVS functions optimized to work on the CEVA-MM3101, their imaging-oriented DSP platform. These functions come with a number of options and parameters, since one solution doesn’t necessarily solve all the problems optimally. And with no standards out there, they see this as an opportunity for their customers to differentiate their camera solutions.

Other reasons for using a programmable solution instead of a hardware version are that the DSP hardware can be reused for other functions, or that DVS can be combined with other functions like Super-Resolution.

Their solution provides correction along four axes: the x/y/z directions plus one angular direction, roll (which they call rotation about the Z axis, so I guess the convention is that forward is along the Z axis). They also provide correction for the “Jello effect”: distortion that occurs when the shutter is rolling while the image or camera is moving, causing something of a relativistic leaning effect. And they can scale to handle 4K Ultra HD on a single core and adapt to various lighting conditions.
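To picture what a rolling-shutter fix has to do, here's a toy sketch (in no way CEVA's implementation) that straightens a skewed vertical edge by shifting each row back in proportion to how late it was read out. The image size, the drift value, and the function name are all made-up assumptions for illustration.

```c
/* Toy rolling-shutter ("Jello") skew correction: rows are captured at
 * slightly different times, so horizontal motion during readout makes
 * vertical edges lean. Shifting each row back in proportion to its
 * readout time undoes the lean. */
#include <stdio.h>
#include <string.h>

#define WIDTH  8
#define HEIGHT 6

/* vx: estimated horizontal camera drift, in pixels per full-frame readout. */
static void unskew_rows(unsigned char img[HEIGHT][WIDTH], float vx)
{
    unsigned char row[WIDTH];

    for (int y = 0; y < HEIGHT; y++) {
        /* Rows read out later have drifted further; shift them back. */
        int shift = (int)(vx * y / HEIGHT);

        for (int x = 0; x < WIDTH; x++) {
            int src = x + shift;
            row[x] = (src >= 0 && src < WIDTH) ? img[y][src] : 0;
        }
        memcpy(img[y], row, WIDTH);
    }
}

int main(void)
{
    /* A vertical "edge" that a rolling shutter has skewed to the right. */
    unsigned char img[HEIGHT][WIDTH] = { 0 };
    for (int y = 0; y < HEIGHT; y++)
        img[y][3 + y / 2] = 1;               /* leans ~half a pixel per row */

    unskew_rows(img, (float)HEIGHT / 2.0f);  /* undo the drift */

    for (int y = 0; y < HEIGHT; y++) {
        for (int x = 0; x < WIDTH; x++)
            putchar(img[y][x] ? '#' : '.');
        putchar('\n');
    }
    return 0;
}
```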

But power is also critical: they say that existing solutions use around 1 W of power; they’re touting less than 35 mW for 1080p30 video when implemented on a 28-nm process.

This new set of libraries could be integrated into an application using their new Application Development Kit (ADK), which they announced at the same time. The ADK is a framework for easing application development and optimization.

One noted feature is called SmartFrame. This allows a developer to operate on an entire video frame while the underlying framework takes care of logistical details. In particular, it can tile up the frame and apply “tunneling” to multiple algorithms, which they refer to as “kernels.”

This tunneling combines a pipeline architecture with the ability to chain multiple kernels together for back-to-back execution. Without tunneling, each kernel would be called by a program, and execution would return to the calling function after the kernel completed so that the program could then call the next kernel.

Instead, the framework allows the first kernel to work on one tile and then hand that tile directly off to the next kernel in the chain while the first kernel starts work on the second tile. And so forth for additional tiles and kernels. This minimizes the amount of data copying needed, and control doesn’t return to the calling program until the entire frame has been processed by all of the kernels.
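A rough sketch of that flow, with hypothetical tile and kernel names rather than the ADK's actual API, might look like the following. For simplicity it runs the chain sequentially on each tile instead of overlapping kernels the way a real pipeline would, but it shows the key point: each tile passes through all of the kernels without control bouncing back to the caller in between.

```c
/* Sketch of the "tunneling" idea: instead of running each kernel over the
 * whole frame and returning to the caller in between, every tile is handed
 * straight down a chain of kernels while it is still in local memory.
 * Names and structures here are hypothetical, not CEVA's ADK API. */
#include <stdio.h>
#include <string.h>

#define TILE_SIZE   64
#define NUM_TILES    4
#define NUM_KERNELS  3

typedef struct {
    unsigned char pixels[TILE_SIZE * TILE_SIZE];
} Tile;

typedef void (*Kernel)(Tile *tile);

/* Placeholder kernels standing in for image-processing stages. */
static void denoise(Tile *t)   { (void)t; printf("  denoise\n"); }
static void stabilize(Tile *t) { (void)t; printf("  stabilize\n"); }
static void sharpen(Tile *t)   { (void)t; printf("  sharpen\n"); }

/* Without tunneling, each kernel would be called on the full frame and
 * control would return here between kernels, forcing the intermediate
 * frame to be written out and re-read each time. With tunneling, each
 * tile flows through all kernels back-to-back, and control only returns
 * to the caller once the whole frame has been processed. */
static void process_frame_tunneled(Tile tiles[NUM_TILES],
                                   Kernel chain[NUM_KERNELS])
{
    for (int t = 0; t < NUM_TILES; t++) {
        printf("tile %d:\n", t);
        for (int k = 0; k < NUM_KERNELS; k++)
            chain[k](&tiles[t]);   /* hand the tile directly to the next kernel */
    }
}

int main(void)
{
    Tile frame[NUM_TILES];
    Kernel chain[NUM_KERNELS] = { denoise, stabilize, sharpen };

    memset(frame, 0, sizeof frame);
    process_frame_tunneled(frame, chain);
    return 0;
}
```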

The ADK also makes it possible to call DSP offloads from CPU programs, something we saw with CEVA’s AMF announcement.

You can find out more about CEVA’s DVS capabilities in their DVS announcement, and there’s more about their ADK in their ADK announcement.
