editor's blog

Crappy + Crappy = Not So Bad

We’ve all seen some of the crappy pictures that cell phones have allowed us to take and distribute around the world at lightning speed. (Is there such a concept as “photo spam” – legions of crappy pictures that crowd out the few actual good ones?)

Now… let’s be clear: much of the crappiness comes courtesy of the camera operator (or the state of inebriation of the operator). But even attempts at good composition and topics of true interest can yield a photo that still feels crappy.

Part of the remaining crappiness is a function of resolution: phone cameras have traditionally had lower resolution than digital SLRs. So we up the resolution. And, frankly, phone resolution is now up where the early digital SLRs were, so the numbers game is constantly shifting as we pack more pixels into less space on our imaging chips.

But that comes with a cost: smaller pixels capture less light, simply because fewer photons impinge on each one. So higher-res chips don’t perform as well in low-light situations. (Plus, they traditionally cost more – not a good thing in a phone.)
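The area effect is easy to put numbers on. A quick sketch (the pixel pitches here are illustrative examples, not specs from any actual sensor): light gathered per pixel scales roughly with pixel area, so halving the pitch quarters the photons.

```python
# Light gathered per pixel scales (roughly) with pixel area.
# The pitches below are hypothetical examples, not real sensor specs.
pitch_um = [2.0, 1.4, 1.0]             # pixel pitch in microns
area = [p * p for p in pitch_um]       # area ~ pitch squared
relative_light = [a / area[0] for a in area]
# Halving the pitch (2.0 um -> 1.0 um) leaves each pixel with
# roughly a quarter of the photons it used to get.
```

Everything else being equal, that quarter of the light shows up directly as a worse signal-to-noise ratio in dim scenes.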

There is an alternative called Super Resolution (SR), however, and to me it’s reminiscent of the concept of dithering. I also find the name somewhat misleading: it isn’t a super-high-res camera; rather, it takes several low-res images and does some mathematical magic to combine them into a single image that has higher resolution than the originals – like four times the resolution. It’s part of the wave of computational photography that seems to be sweeping through these days.
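To make the “mathematical magic” a little more concrete, here’s a minimal shift-and-add sketch – one classic multi-frame SR approach, not necessarily what any particular vendor implements. It assumes the subpixel shifts between frames are already known (a real pipeline has to estimate them by registering the frames), and it skips the deblurring and regularization a production algorithm would add.

```python
import numpy as np

def shift_and_add_sr(frames, shifts, scale=2):
    """Naive shift-and-add super resolution: scatter each low-res
    frame's pixels onto a finer grid according to its subpixel shift,
    then average wherever samples landed."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Shifts are in low-res pixel units; map to high-res grid positions.
        ys = np.clip(np.round((np.arange(h) + dy) * scale).astype(int),
                     0, h * scale - 1)
        xs = np.clip(np.round((np.arange(w) + dx) * scale).astype(int),
                     0, w * scale - 1)
        acc[np.ix_(ys, xs)] += frame
        weight[np.ix_(ys, xs)] += 1
    weight[weight == 0] = 1  # avoid divide-by-zero where nothing landed
    return acc / weight
```

With four frames shifted by half a pixel in each direction and `scale=2`, every high-res grid position gets exactly one sample – which is why the “four times the resolution” figure (2x in each dimension) keeps coming up.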

The way it works is that the camera takes several pictures in a row. Each needs to be slightly shifted from the others. In other words, if you take a static subject (a bowl of fruit and flowers) and put the camera on a tripod, this isn’t really going to help – there’s no shift. One challenge is that, with too much motion, you can get “ghosting” – if a hand moves between shots, for example, you might see a ghostly-looking hand smeared in the combined version.

It’s been available as a post-processing thing on computers for a while, but the idea now is to make it a native part of cameras – and cameraphones in particular. Which is good, since I can’t remember the last time I saw someone taking a still life shot with a phone on a tripod. (Besides… fruits don’t do duckface well.)

In this case, the slight shaking of the holding hand may provide just the movement needed to make this work. But, of course, you need the algorithms resident in the phone. Which is why CEVA has announced that it has written SR code for its MM3101 vision-oriented DSP platform. They claim that this is the world’s first implementation of SR technology for low-power mobile devices.

Their implementation allows this to work in “a fraction of a second.” Meaning that it could become the default mode for a camera – this could happen completely transparently to the user. They also claim that they’ve implemented “ghost removal” to avoid ghosting problems (making it less likely that the user would want to shut the feature off… although for action shots? Hmmm…).
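CEVA doesn’t say how their ghost removal works, but the general idea can be sketched as per-pixel outlier rejection: a sample that disagrees badly with the other frames at that pixel probably belongs to something that moved, so it gets masked out of the merge. The version below – a crude stand-in, not CEVA’s method – compares each aligned frame against the per-pixel median.

```python
import numpy as np

def merge_with_ghost_removal(frames, thresh=0.2):
    """Average aligned frames per pixel, dropping samples that deviate
    too far from the per-pixel median. Moving objects appear in only
    some frames, so their samples get rejected instead of smeared."""
    stack = np.stack(frames).astype(float)   # shape (n_frames, h, w)
    median = np.median(stack, axis=0)
    mask = np.abs(stack - median) <= thresh  # keep only consistent samples
    counts = mask.sum(axis=0)
    counts[counts == 0] = 1                  # avoid divide-by-zero
    return (stack * mask).sum(axis=0) / counts
```

The obvious tension the article hints at: for an action shot, the “ghost” may be the subject, so aggressive rejection trades smearing for a subject frozen from fewer frames (and hence noisier).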

You can get more detail in their release.
