
Imaging Without Lenses

Apparently That Makes Sense

A simple-sounding-yet-bizarre notion has been frequenting certain optical research spots. It’s the concept of a “lens-free” imaging system. I’ve found no mentions that bother to explain exactly what that means.

Perhaps that’s because it should be obvious: you need no lens. Duh…

You know, it’s been a long time since I studied optics in college physics. And once my career was plotted on the digital scale, my mathematical tendencies have been spoiled by a solution space that gives you a 50:50 chance of being right just by guessing. Things are either 1 or 0. (OK, or Z or X. Fine. Happy?)

Dealing with a continuum no longer computes for me, even though most of the world has to do hard math like integrals that have no symbolic solution. Messy messy. So explanations that immediately jump into bizarre (to me), specialized math are theoretically within my grasp, but the effort to plumb it all has typically been more than I have wished to expend. I’m an intuitive kind of guy, and I’d be happy generally knowing what’s going on; I don’t have to be able to calculate it.

After all, from the days when we were mere tots, we’ve learned that cameras record images for later viewing, and cameras have lenses. (That was more obvious in the past, when you had to manually focus things; perhaps that notion is foreign to the current young generation…) And in physics class we learned that lenses help to resolve images onto the plane of film so that they come out nice and crisp.

So… this lens-free thing: Are you saying we really didn’t need those lenses all along? Anybody want to address this apparent contradiction? Doesn’t anyone else see it? I feel like I’m taking crazy pills!

Imec and Leti both dabble in this technology. While visiting Imec, I asked if I could get a better explanation; they referred me to some work being done at UCLA – which we’ll return to. None of the (admittedly few) people I talked to were willing to sit and explain it. So it was left to me to try to sort this out on my own. Just me and my friend Google. Just setting expectations here…

I’m not going to say that I’ve got it all nailed now. I have a better sense, but, well, I feel like I’ve successfully gotten myself to the top of a thin mountain saddle – and the slightest puff of wind could blow me back down. So I wanted to share the results of my interwebular meanderings with anyone who might benefit from this technology were it not for confusion similar to mine. And to open a conversation for any contributors who might want to bolster the discussion or fill in any holes.

I’ve found two big conceptual differences between lens-free imaging and traditional photography. The first is critical: this isn’t photography – it’s holography. Which dumps me deep into territory I’ve never been comfortable with.

Having been forced to wrestle with my holographic shortcomings, I’ve managed a few “ahas.” The main thing is that holography isn’t about recording an image. It’s about recording information about the diffraction of light in such a way that the image can be recreated. This is done with interfering “laser” beams.

That’s still a rather squishy concept for me. I can go back and read the details again and think, “Oh yeah, that’s right… got it.” And that has a half-life of, oh, about a day or two. If sober.

But the critical distinction is that a photograph records an image for “later playback.” That playback consists of shining a light on the image and watching how the impinging light is reflected or absorbed. By contrast, holography is about literally recreating the image in real time.

That’s a bit of a tenuous concept for me, so I came up with an analogy. (I’d be dead without analogies.) Let’s look at music instead. Let’s say we have an electronic keyboard on which you can play lots of music by depressing the keys in accordance with your favorite song. You’re creating the music right then. But let’s say that this MIDI-capable keyboard also has keys that allow you to play pre-recorded samples.

When you play back something pre-recorded, it’s like a photograph. It’s a recording of something that happened in the past. But there’s another way you can “pre-record” music for playback: you can play the song and have your keystrokes recorded. MIDI lets you do this.

At some future time, you can run that “script” back through the keyboard, and it will play what you played without your having to do it all over again, like a player piano. The critical distinction is that, unlike the pre-recorded playback, MIDI lets you re-create the music in real time just the way you played it before. It’s not a recording of what you played before; it is what you played before, except that it’s being played now. Automatically; not manually by you.
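To make that distinction concrete, here’s a toy sketch in Python. The event tuples are a hypothetical stand-in for a real MIDI stream (actual MIDI is binary, not this), and the print call stands in for a synthesizer:

```python
import time

# A MIDI-style "script": not sound, just timed instructions for re-creating it.
# The (seconds, note, action) tuples are a hypothetical stand-in for real MIDI events.
performance = [
    (0.0, "C4", "on"), (0.5, "C4", "off"),
    (0.5, "E4", "on"), (1.0, "E4", "off"),
    (1.0, "G4", "on"), (2.0, "G4", "off"),
]

def replay(events):
    """Drive the 'keyboard' in real time, player-piano style."""
    start = time.monotonic()
    for t, note, action in events:
        time.sleep(max(0.0, t - (time.monotonic() - start)))
        print(f"{t:4.1f}s  {note} {action}")  # a real synth would sound the note here

replay(performance)
```

A sampled recording, by contrast, would be a long array of audio amplitudes – a fixed picture of the sound. The script above is nothing until an instrument re-performs it.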

Holography seems to me like that. While photography provides a recording of some image from the past, holography recreates that actual image in real time.

Woo woo, profound, I know. I’ll give you a minute to catch your thoughts.
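Before we move on: for anyone who wants the textbook version of that hand-waving, the math is mercifully short. (This is standard holography-101 material, not anything specific to the lens-free papers.) Recording means capturing the intensity of a reference wave $R$ interfering with the object wave $O$:

$$ I = |R + O|^2 = |R|^2 + |O|^2 + R^*O + RO^* $$

Those cross terms are the whole trick: they encode the phase of $O$, which a plain photograph throws away. Playback means shining the reference wave back through the recorded transmittance $t \propto I$:

$$ tR \;\propto\; \left(|R|^2 + |O|^2\right)R \;+\; |R|^2\,O \;+\; R^2 O^* $$

The middle term is a scaled copy of $O$ itself – the original object wave, live again. (The last term is the so-called twin image, a classic source of artifacts in in-line setups.)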

The second critical distinction is that lens-free imaging is strictly a 1:1 magnification deal. Think macro photography. (But, as we’ll see, typically used on a micro scale.) One of the things a lens does is to bring light from a wide angle, reflected off of various items in view within that angle, into focus on a relatively small area. No lens? Then that’s not going to happen. Yes, depending on the light source, you may have photons impinging from many directions, but they won’t create the image you might expect. In fact, lens-free microscopy works better if all the light comes in straight, with few/no stray rays.
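As a sanity check on that 1:1 claim, the geometry is simple (this is my own back-of-the-envelope, not something from the papers): with a point source a distance $z_1$ above the sample and the sensor a distance $z_2$ below it, the projected pattern is magnified by

$$ M = \frac{z_1 + z_2}{z_1} $$

If the source sits several centimeters away while the sample sits a millimeter or less above the sensor – which, as I understand it, is typical of these on-chip setups – then $M \approx 1$. Unit magnification, and your field of view is simply the area of your sensor.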

So the key here is that a lens-free camera isn’t going to give you a nice Christmas card to send to the in-laws. Unless you don’t like your in-laws; then you could send it just to mess with them. It’s going to give you a fringey “whoa, I need to lay off the – damn, was that really egg nog??” image.

Now, some lens-free images might be processed in real time; others could be recorded as holograms for later “playback.” And we all know what those holograms look like, although they can be a bit poor when viewed in the chaotic, incoherent light within which we spend our days. The biggest artifact you might notice is the false coloring, the fringey, rainbowey effect that might be nice if you’re out hunting garden gnomes or unicorns, but can confuse an honest scientist trying to understand what something really looks like. Especially when color is used as a label to signal something important, like a tagged cell.

Which brings us back to Lala Land, where one Dr. Ozcan has a team doing research in this area at UCLA. See? If it were simple, we wouldn’t have great minds still trying to figure it out. I keep telling myself that.

As an example of some of the work being done, they’ve found a way to combat some of this fringey coloring. They start by taking their RGB image and converting it to YUV. That separates out intensity (Y) from color (U, V) – originally done so that color TV signals could be sent alongside the pre-existing black-and-white signals.
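For reference, one common (BT.601) version of that conversion looks like this – I can’t say which exact variant the UCLA team used:

$$ \begin{pmatrix} Y \\ U \\ V \end{pmatrix} = \begin{pmatrix} 0.299 & 0.587 & 0.114 \\ -0.147 & -0.289 & 0.436 \\ 0.615 & -0.515 & -0.100 \end{pmatrix} \begin{pmatrix} R \\ G \\ B \end{pmatrix} $$

Y is a weighted sum of all three channels (the intensity); U and V are just scaled color differences (B − Y and R − Y).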

First, a couple of details that help to reinforce the fact that this is not ordinary photography. To control the quality of the incoming light and restrict the range of incoherent light, the “laser” light source is restricted to 6 cm from the imager. That’s a couple of inches for us Merkans. Now… normally, we image by reflecting light off an object, but that makes all kinds of messy rays going in all kinds of directions.

No, we want straight rays – limited, in their case, to ±3° off of normal incidence. If you’re going to have straight rays, that means… what, you place the light in front of the object? Of course not, silly. You place it behind the object – and the object has to be somewhat transparent (or at least translucent) so that the light is transmitted through it – and the scattered light interferes with the unobstructed light to create the diffraction image.
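To convince myself that a plain intensity sensor really does capture something useful this way, I hacked together a little simulation. This is a minimal numpy sketch of in-line hologram formation – plane-wave illumination through a mostly transparent object, propagated to the sensor with the standard angular-spectrum method. All the numbers are made up for illustration; this is nobody’s production code:

```python
import numpy as np

# Illustrative numbers only: 1 µm pixels, green-ish light, sample 1 mm above sensor.
N, dx = 512, 1.0e-6          # grid size, pixel pitch (m)
wavelength = 530e-9          # illumination wavelength (m)
z = 1.0e-3                   # object-to-sensor distance (m)

x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)

# Object: a transparent slide with one 20-µm opaque disk (a stand-in "cell").
t = np.ones((N, N), dtype=complex)
t[X**2 + Y**2 < (10e-6) ** 2] = 0.0

# Under unit-amplitude plane-wave illumination, the field just past the object
# is t itself. Propagate it to the sensor via the angular-spectrum transfer function.
fx = np.fft.fftfreq(N, dx)
FX, FY = np.meshgrid(fx, fx)
arg = (1.0 / wavelength) ** 2 - FX**2 - FY**2
H = np.where(arg > 0, np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0))), 0.0)
U = np.fft.ifft2(np.fft.fft2(t) * H)

# The lens-free sensor records only intensity: the scattered light interfering
# with the unobstructed background wave. The rings are the "fringey" image.
hologram = np.abs(U) ** 2
print(hologram.shape, hologram.min(), hologram.max())
```

Recovering the object from that hologram amounts to propagating the recorded field back by −z, which is where the reconstruction math earlier comes in.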

This clearly starts to limit the utility of this technology to microscopy. Before, we said it’s useful for that, but, at present, it seems more or less useful only for that. I don’t mean that you have to have someone in a lab coat looking at a slide for this to work, but, so far, anyway, based on these restrictions, it’s all about imaging very small things. Like, for instance, discriminating blood cells in Imec’s cell sorter.

OK, back to the color problem. The UCLA team wanted to get rid of the rainbow (jeez that sounds evil) while maintaining high resolution. They could correct the image by averaging colors over a 13-µm window, but doing that specifically lowers the resolution.

Instead, by moving to the YUV space, they could go ahead and average the colors in the U and V dimensions, with the attendant loss of resolution. But they picked one color – green – to use as the Y channel, and they kept that at high resolution. When they put the channels back together, the high-resolution Y channel restored the detail lost through the averaging, yielding an improved high-resolution color image. They then improved contrast by removing the color component from any areas that had intensity below a specified threshold.
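Here’s a rough sketch of that pipeline as I pieced it together from their description – my own reconstruction in Python, emphatically not the UCLA team’s code, with invented parameter values:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def correct_color(rgb, win=13, y_thresh=0.1):
    """Average the chroma, keep the green channel sharp, gate color by intensity.

    rgb: (H, W, 3) float image in [0, 1] from the lens-free reconstruction.
    win is in pixels (the paper quotes 13 µm; the conversion depends on your
    pixel pitch). y_thresh is an invented placeholder value.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

    # BT.601-style RGB -> YUV: intensity (Y) separated from color (U, V).
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)

    # Average the color channels over the window: kills the rainbow fringing,
    # at the cost of chroma resolution.
    u = uniform_filter(u, size=win)
    v = uniform_filter(v, size=win)

    # Use the green channel, kept at full resolution, as the Y channel,
    # restoring the detail the averaging threw away.
    y = g

    # Contrast step: strip the color from anything dimmer than the threshold.
    dim = y < y_thresh
    u = np.where(dim, 0.0, u)
    v = np.where(dim, 0.0, v)

    # YUV -> RGB.
    r2 = y + v / 0.877
    b2 = y + u / 0.492
    g2 = (y - 0.299 * r2 - 0.114 * b2) / 0.587
    return np.clip(np.stack([r2, g2, b2], axis=-1), 0.0, 1.0)
```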

So that gives you something of a flavor of what’s going on with lens-free imaging. Like I say, I’m not claiming to have it down pat, but it makes more sense to me now than it did before. Feel free to add your pearls of wisdom for all of our benefit.

