Aug 29, 2013

Stable Video, Easier Development

posted by Bryon Moyer

There are times when a shaky video can be just the thing. Imagine: where would the Blair Witch Project have been without it? What would an entire generation of hipsters do without the ability to shake (ironically) their video? How are happening new producers supposed to attract new audiences without being able to make their video look shoddy and unprofessional?

But, aside from those times, shaky video is not so good. In fact, for a lot of us, even these examples aren’t great (some of them representing videos that only their producers could love). We all have a hard enough time holding our cameras still without having to add shaking as part of some obscure production value.

We already take care of this for still pictures using optical image stabilization (OIS); why not simply apply that technology to videos? Well, for the same reason you don’t use a still photo camera to shoot video. Oops! Wait… I guess these days you can. OK, then simply because video isn’t the same as a still picture, so what works for one doesn’t necessarily work for the other.

First of all, still photos are just that: still. Any movement is wrong (except for things like sports, I suppose). So, in theory, you can just neutralize any motion, period. But, as you can imagine, that’s not going to work with video – unless you want to turn your video into a still shot. No, video means motion by definition. So when stabilizing a video image, you have to figure out what motion is intended and what motion is due to unwanted shaking.
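
To make that distinction concrete, here's a minimal sketch of the classic approach (my own illustration, not CEVA's algorithm): estimate each frame's global motion, treat a smoothed version of the accumulated camera path as the intended motion, and correct each frame by the difference. All of the names and parameters below are invented for the example.

    import numpy as np

    def stabilization_offsets(frame_motion, window=15):
        # frame_motion: (N, 2) array of per-frame global (dx, dy) estimates,
        # e.g. from feature tracking or phase correlation (not shown here).
        path = np.cumsum(frame_motion, axis=0)            # raw camera trajectory
        kernel = np.ones(window) / window                 # simple moving-average smoother
        smooth = np.column_stack([np.convolve(path[:, i], kernel, mode="same")
                                  for i in range(2)])     # approximate "intended" trajectory
        return smooth - path                              # per-frame shift that removes the shake

    # Example: a jittery pan to the right
    raw = np.column_stack([2.0 + np.random.randn(100), np.random.randn(100)])
    offsets = stabilization_offsets(raw)

A real implementation works much harder on the motion estimation and on deciding what counts as intended motion, but that split between the two is the heart of the problem.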

Unlike OIS, which is typically implemented as hardware embedded into the camera module, CEVA is proposing handling digital video stabilization (DVS) in software for things like smartphones, wearable electronics, and cameras mounted inside moving vehicles, all of which involve inherent shaking. And all of which benefit from low power – wearable cameras in particular.

So CEVA has put together a set of DVS functions optimized to work on the CEVA-MM3101, their imaging-oriented DSP platform. These functions come with a number of options and parameters, since one solution doesn’t necessarily solve all the problems optimally. And with no standards out there, they see this as an opportunity for their customers to differentiate their camera solutions.

There are other reasons for using a programmable solution instead of a hardware version: the DSP hardware can be reused for other functions, or it can combine DVS with other functions like Super-Resolution.

Their solution provides correction along four axes: the x, y, and z directions plus one angular direction, roll (which they call rotation about the Z axis, so I guess the convention is that forward is along the Z axis). They also provide correction for the “Jello effect”: the distortion that occurs when a rolling shutter reads out while the image or camera is moving, causing something of a relativistic leaning effect. And they can scale to handle 4K Ultra HD on a single core and adapt to various lighting conditions.
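
As a toy illustration of the Jello effect (again, my own sketch, not CEVA's method): each sensor row is read out at a slightly different time, so if you know roughly how far the camera had moved by the time each row was captured, you can shift the rows back into line. The names and numbers are made up.

    import numpy as np

    def unskew_rolling_shutter(frame, row_shift_px):
        # frame: (H, W) image; row_shift_px: estimated horizontal camera displacement
        # at each row's readout time, relative to the first row.
        out = np.empty_like(frame)
        for r in range(frame.shape[0]):
            out[r] = np.roll(frame[r], -int(round(row_shift_px[r])))   # crude shift; wraps at the edge
        return out

    # Example: a pan of 10 pixels over one frame's readout skews the rows linearly
    frame = np.random.rand(480, 640)
    corrected = unskew_rolling_shutter(frame, np.linspace(0.0, 10.0, 480))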

But power is also critical: they say that existing solutions use around 1 W of power; they’re touting less than 35 mW for 1080p30 video when implemented on a 28-nm process.

This new set of libraries can be integrated into an application using their new Application Development Kit (ADK), which they announced at the same time. The ADK is a framework for easing application development and optimization.

One noted feature is called SmartFrame. This allows a developer to operate on an entire video frame while the underlying framework takes care of logistical details. In particular, it can tile up the frame and apply “tunneling” to multiple algorithms, which they refer to as “kernels.”

This tunneling combines a pipeline architecture with the ability to chain multiple kernels together for back-to-back execution. Without tunneling, each kernel would be called by a program, and execution would return to the calling function after the kernel completed so that the program could then call the next kernel.

Instead, the framework allows the first kernel to work on one tile and then hand that tile directly off to the next kernel in the chain while the first kernel starts work on the second tile. And so forth for additional tiles and kernels. This minimizes the amount of data copying needed, and control doesn’t return to the calling program until the entire frame has been processed by all of the kernels.
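
In rough Python pseudocode (the real ADK runs compiled kernels on the MM3101; the function names here are invented), the difference looks something like this. The sketch shows the data flow only; on the DSP the kernels in the chain also overlap in time, pipeline-style.

    import numpy as np

    def tiles(frame, tile_h=64, tile_w=64):
        h, w = frame.shape
        for y in range(0, h, tile_h):
            for x in range(0, w, tile_w):
                yield (slice(y, y + tile_h), slice(x, x + tile_w))

    def run_without_tunneling(frame, kernels):
        # Each kernel processes the whole frame and returns to the caller,
        # materializing a full intermediate frame between kernels.
        for k in kernels:
            frame = k(frame)
        return frame

    def run_with_tunneling(frame, kernels):
        # Each tile is handed straight from one kernel to the next; control comes
        # back to the caller only after every tile has been through the whole chain.
        out = np.empty_like(frame)
        for sl in tiles(frame):
            t = frame[sl]
            for k in kernels:
                t = k(t)
            out[sl] = t
        return out

    # Example chain of two trivial "kernels"
    chain = [lambda t: t * 0.5, lambda t: np.clip(t, 0.0, 1.0)]
    result = run_with_tunneling(np.random.rand(1080, 1920), chain)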

The ADK also makes it possible to call DSP offloads from CPU programs, something we saw with CEVA’s AMF announcement.

You can find out more about CEVA’s DVS capabilities in their DVS announcement, and there’s more about their ADK in their ADK announcement.

Aug 28, 2013

Two Ways to Tune an Antenna

posted by Bryon Moyer

We’ve looked at a couple of companies focusing on improving the performance of cell phone antennas in real time as conditions change. WiSpry (MEMS) and Peregrine (SOS CMOS) were two such examples. But Cavendish Kinetics came into the picture as well, and it turns out that there’s another layer of nuance as to what these companies do.

According to Cavendish, there are two ways to improve antenna performance: tune the impedance and tune the frequency. In the former case, you have an antenna that has to work with multiple frequencies, but is not specifically optimized for all of those frequencies. But as conditions or utilized bands change, the impedance matching may not be optimal. So companies like WiSpry and Peregrine provide capacitor networks that allow real-time tweaking of the impedance to reduce signal loss.
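
A back-of-the-envelope illustration of why the matching matters (generic transmission-line math, not vendor data): the fraction of power reflected at the antenna feed depends on how far the antenna's impedance has drifted from the system impedance, usually 50 ohms.

    import math

    def mismatch_loss_db(z_load, z0=50.0):
        gamma = abs((z_load - z0) / (z_load + z0))   # reflection coefficient magnitude
        return -10 * math.log10(1 - gamma**2)        # power lost to reflection, in dB

    print(mismatch_loss_db(50.0))       # perfectly matched: nothing lost
    print(mismatch_loss_db(20 + 30j))   # detuned antenna: roughly 1.6 dB bounces back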

But Cavendish Kinetics claims to be doing something different: the capacitor arrays they create aren’t for adjusting the impedance; they’re for re-centering the frequency of the antenna. While they say that impedance tuning can improve the signal by 20% or so, they claim that they can get a 2X improvement in signal strength simply by tuning the antenna to whichever frequency is in use at a particular time.
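
The frequency-tuning idea is easy to see with a generic LC model (illustrative values only, nothing to do with Cavendish's actual design): the antenna's resonance sits at 1/(2π√(LC)), so switching different amounts of capacitance onto a fixed inductance slides the resonance from one band to another.

    import math

    L_HENRIES = 8e-9   # fixed antenna inductance (made-up value)

    def resonant_freq_hz(c_farads):
        return 1.0 / (2 * math.pi * math.sqrt(L_HENRIES * c_farads))

    for c_pf in (0.4, 0.8, 1.6):   # switchable capacitor settings, in pF
        print(f"{c_pf} pF -> {resonant_freq_hz(c_pf * 1e-12) / 1e9:.2f} GHz")

That re-centering is what lets the same physical antenna look well-tuned in whichever band the radio happens to be using at the moment.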

We’ll look more at the specifics of how they create their capacitor arrays in a future story, but that’s secondary to the fact that they’re actually trying to solve a different problem than folks that, on the surface, would appear to be doing the same thing.

Aug 26, 2013

A Different Spin on Job Loss

posted by Bryon Moyer

In a discussion with Teledyne DALSA about their MIDIS MEMS process, we spent a few moments discussing how the ASIC die and the MEMS die are mated together. With this technology, the MEMS die has landing pads and the ASIC die gets micro-bumped and flipped and mated to the landing pads.

The question was whether this was done wafer-to-wafer or using known-good dice. The answer was wafer-to-wafer, since yield allows it and the costs are much lower. All pretty much reasonable reasoning.

But then we veered into somewhat more surprising territory. The reason it’s cheaper is that it’s a whole lot easier for a robot to take a wafer, invert it, align it, and stick it onto the receiving MEMS wafer. If you take a known-good-dice approach, then you first have to test the ASIC wafer to figure out which dice are good, then saw the thing up, and then pick out the good dice. You then have to place them on the waiting MEMS dice (which would presumably still be in full wafer form), placing them only on MEMS dice that have been shown to work by whatever testing could be done at the wafer level.

This is a lot of work and requires much more worker intervention than the robotic wafer-to-wafer process. More specifically, it requires more workers. Which costs more. We’re used to casting aside jobs with technology because, in the emotion-and-ethic-free world of finance, the dollar (or your favorite currency) is king and is all that matters. If jobs suffer while I make more money, it’s not my problem (because it’s not my job suffering).

It was as if, wanting to address this potential conscience twinge, they went one step further to justify the fewer-workers approach, and it went like this: These things are assembled in Southeast Asia. Southeast Asia has a bad reputation for employing child laborers. So by eliminating the jobs, we reduce the problem of child labor.

Bet you didn’t see that one coming! Nice to know we’re doing something good for the world…
