Today’s image sensors—the ones used in digital photography, surveillance, and machine vision—are amazing, awesome, and [insert your favorite superlative here]. Even more remarkable is the fact that they achieve all this while effectively “throwing away” around 70% of the incoming light. Can you imagine how much more amazing, awesome, and [your superlative here] they could be if they used 100% of the light instead? Well, now they can!
I don’t know about you, but I seem to experience things in waves, especially when it comes to topics with a technological twist. Currently, I’m surfing merrily along on the crest of an “image sensor” wave, for example, because developments in these waters are coming thick and fast.
Before we plunge headfirst into the heart of this column, it will be worth our taking a few moments to set the scene. As I wrote in my paper on color vision:
What we refer to as “light” is simply the narrow portion of the electromagnetic spectrum that our eyes can see (detect and process), ranging from violet at one end to red at the other, and passing through blue, green, yellow, and orange on the way (at one time, indigo was recognized as a distinct spectral color, but this is typically no longer the case).
When we use terms like “red,” “yellow,” “green,” and “blue,” these are just labels that we have collectively agreed to associate with specific sub-bands of the spectrum. Just outside the visible spectrum above violet is ultraviolet, the component of the sun’s rays that gives us a suntan (and skin cancer). Similarly, just below red is infrared, which we perceive as heat.
In the ideal case, white light is a mix of all visible wavelengths of light (from red to violet) at equal or balanced intensity, such that the human eye perceives it as white. Such light would come from a theoretical source emitting a uniform spectral power distribution across the visible spectrum.
Sunlight is not perfectly white, but it’s very close. To the human eye, sunlight looks white under typical daylight conditions, especially when the sun is high in the sky. The way this works is that our eyes and brains adapt to the overall color of ambient light (a process called chromatic adaptation). Even though sunlight has more green and yellow (which explains why our eyes are so sensitive to green), our visual systems balance the colors, so we perceive it as white.
In practice, most real-world white light sources do not emit all colors equally; instead, they approximate white. For example, computer monitors and television screens represent white light by combining red, green, and blue pixels at specific intensities. This looks white to us, but the spectrum is actually just three spikes, not a continuous mix (this is called metamerism: different spectra can look like the same color to human vision).
One more thing before we start is the concept of “primary colors,” which refers to a set of three or more colors that form the basis for defining a color space. By combining these primaries in different proportions, we can represent a wide range of colors within that space. One way to look at this (no pun intended) is that a primary color is a color that can’t be made by mixing other colors, but you can mix it with other primary colors to make many other colors. In additive color systems (like screens), the primary colors are typically Red, Green, and Blue (RGB). A helpful way to visualize things is by means of a color wheel, as shown below (this image assumes 8-bit fields for the RGB values, and these values are shown in hexadecimal).
Additive color wheel (Source: Clive “Max” Maxfield)
A key point to note here (one whose import will become apparent later) is that each color has a complementary value located 180° around the wheel. So red and cyan are complementary to each other, as are blue and yellow, for example.
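If you’d like to play with this yourself, here’s a quick Python sketch of my own (nothing official, just a back-of-the-envelope aid) that computes a color’s complement by inverting each 8-bit RGB channel, which is how you land 180° away on the additive wheel:

```python
def complement(rgb):
    """Return the additive (RGB) complement of an 8-bit color.
    The complement sits 180 degrees away on the color wheel."""
    r, g, b = rgb
    return (255 - r, 255 - g, 255 - b)

# Red (0xFF0000) pairs with cyan (0x00FFFF);
# blue (0x0000FF) pairs with yellow (0xFFFF00).
print(complement((255, 0, 0)))   # -> (0, 255, 255)  cyan
print(complement((0, 0, 255)))   # -> (255, 255, 0)  yellow
```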
A classic use for image sensors is in smartphone cameras. In fact, according to a 2024 report by Deloitte, camera quality is the most important feature for consumers when purchasing a new smartphone. The interesting point here is that, using Apple iPhones as an example, although the quality of the images has improved over the years, the resolution stayed at the same 12 megapixels (MP) from 2015 to 2024, as illustrated below.
Resolution stayed the same, so how did camera quality improve? (Source: eyeo)
So, if the resolution stayed the same, how did image quality improve? I’m glad you asked. It’s because they used bigger sensors with bigger pixels, which resulted in more light being collected, which resulted in better picture quality.
Note that the 4x 12MP associated with the 2024 phone reflects the fact that this sensor has 48MP, but groups of 4 adjacent pixels are combined into one “super pixel.” This creates a higher quality 12MP image because more light is gathered per virtual pixel, resulting in better low-light performance. In a crunchy nutshell, more light equals better pictures, as depicted below.
More light = better pictures (Source: eyeo)
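If it helps to see the binning idea in code, here’s a minimal numpy sketch of my own (a simplification; real sensors perform binning in the analog domain or in the image signal processor) showing how summing each 2×2 block of a frame yields a quarter of the pixels but roughly four times the signal per output pixel:

```python
import numpy as np

def bin_2x2(raw):
    """Sum each 2x2 block of sensor pixels into one 'super pixel'
    (a simplified model of pixel binning)."""
    return (raw[0::2, 0::2] + raw[1::2, 0::2] +
            raw[0::2, 1::2] + raw[1::2, 1::2])

# Toy frame of photon counts; a real 48MP sensor would be ~8000 x 6000
raw = np.random.poisson(lam=50, size=(800, 600)).astype(np.uint32)
binned = bin_2x2(raw)              # 400 x 300: a quarter of the pixels,
print(binned.mean() / raw.mean())  # but ~4x the signal per pixel
```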
The reason I’m waffling on about all this here is that I was just chatting with Jeroen Hoet, who is CEO at eyeo. The company name stands for “eye-opening,” which is certainly descriptive of their technology. Their claim to fame is the ability to create image sensors that capture 3x more light than standard sensors. This means you can capture much higher quality images while keeping the same sensor size, or capture the same quality images as a regular sensor while shrinking the sensor to 1/3 of the size. Since the camera accounts for ~20% of the cost of a smartphone, this could be a market disruptor.
So, how can this magic become manifest? Well, a typical image sensor is composed of two main parts: an array of detector elements (a.k.a. picture elements or “pixels”) on the “bottom” and an array of color filter elements on the “top,” as depicted in (a) below.
Standard RGB filters vs. eyeo’s YB and RC splitters (Source: Clive “Max” Maxfield)
In reality, these sensors can contain millions of pixels. However, for the sake of simplicity (and for the sake of making my drawing life simple), we will consider only an 8 x 8 array, as shown in (a) above.
Image sensors (like CCD, CMOS, or SPAD) can detect only the brightness (intensity) of light, not color. To capture color, each pixel is covered by a color filter that lets in only one color of light: red, green, or blue. (For more on these three sensor types, see my column, Will We Soon Say: ‘CMOS Sensors Drool, SPAD Sensors Rule?’).
Observe that the RGB filters are arranged in a Bayer pattern (or Bayer filter mosaic). This uses a 2×2 grid (as shown below) that repeats across the sensor.
In this case, the detector “sees” 25% of the incoming red light, 25% of the blue light, and 50% of the green light (because human vision is most sensitive to green).
Since each filter pixel removes ~2/3 of the incoming light, each detector pixel ends up “seeing” only ~1/3 of the incoming light (sad face). The image is subsequently reconstructed by interpolating the missing two colors at each pixel in a process known as demosaicing.
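To make those percentages concrete, here’s a little Python/numpy sketch of my own (using the toy 8×8 sensor from the figure and a uniform white scene) showing that an RGGB Bayer mosaic delivers only about one third of the incoming light to the detectors:

```python
import numpy as np

H, W = 8, 8  # the toy 8 x 8 sensor from the figure

# Build the repeating RGGB Bayer mask: each entry says which one
# color channel that pixel actually records (0=R, 1=G, 2=B)
bayer = np.zeros((H, W), dtype=int)
bayer[0::2, 0::2] = 0   # R
bayer[0::2, 1::2] = 1   # G
bayer[1::2, 0::2] = 1   # G
bayer[1::2, 1::2] = 2   # B

# A uniform white scene: every pixel receives equal R, G, and B light
scene = np.ones((H, W, 3))

# Each filter passes only its own channel and absorbs the other two
mosaic = np.take_along_axis(scene, bayer[..., None], axis=2)[..., 0]

print("fraction of incoming light reaching the detector:",
      mosaic.sum() / scene.sum())   # -> ~0.333
```

A real pipeline would then demosaic this array, interpolating the two missing channels at every pixel from the neighboring samples.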
And so we come to the folks at eyeo, who are pioneering the next generation of image sensors through their proprietary color splitting technology. Their innovative approach utilizes nano-photonic structures to split the light based on its constituent frequencies, without absorbing any of the light.
As depicted in (b) in the image above, they have two types of splitters. One splits the incoming light into yellow and blue (YB); the other splits the incoming light into red and cyan (RC). Remember that, as per our earlier color wheel discussions, yellow and blue are complementary to each other, as are red and cyan.
The way I like to visualize this (the truth may be stranger) is that each of their splitters straddles two of the detector’s pixels, efficiently directing different wavelengths to the appropriate sensor pixels without the losses associated with conventional filters.
This takes a bit of wrapping one’s brain around, but it’s not so bad once you get the hang of it. The point is that each pair of pixels receives the entire incoming light, just split spectrally. Consider the color composition:
- Yellow = Red + Green
- Cyan = Green + Blue
As a result, each pixel pair captures overlapping information but with more luminance efficiency than traditional RGB filters.
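Here’s a toy numerical sketch of my own (a back-of-the-envelope model, not eyeo’s actual reconstruction math) showing how, if we assume a small patch of the scene has one uniform color, the four detector pixels under one YB pair and one RC pair account for every photon and still let us recover R, G, and B:

```python
# Back-of-the-envelope model (not eyeo's actual reconstruction math):
# one uniform-color patch sampled by one YB splitter pair and one RC pair.
true_r, true_g, true_b = 60, 30, 10   # hypothetical photon counts

# What the four detector pixels under the two splitters record
# (all of the light lands on some pixel; none is absorbed):
y = true_r + true_g    # yellow pixel: red + green
b = true_b             # blue pixel
r = true_r             # red pixel
c = true_g + true_b    # cyan pixel: green + blue

assert y + b == r + c == true_r + true_g + true_b  # nothing thrown away

# RGB can be recovered; green comes out two ways, a handy cross-check
print(r, y - r, b)     # -> 60 30 10
print(r, c - b, b)     # -> 60 30 10
```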
For those who swim in the image processing pipeline waters, there’s one more point that’s worth pondering, which is that YUV is a color space designed to separate luminance (brightness = Y) from chrominance (color = U and V components).
- Y (luma) is (sort of) a weighted sum of R, G, and B.
- U and V represent the blue-yellow and red-cyan axes (essentially color difference channels).
This aligns remarkably well with eyeo’s new splitter scheme!
- The Yellow-Blue pair encodes a Blue–Yellow axis, just like U.
- The Red-Cyan pair encodes a Red–Cyan axis, similar to V.
So, while not exactly the same as traditional YUV encoding (which uses specific math formulas based on RGB), eyeo’s splitter scheme mimics YUV-like separation of channels, but in a more photon-efficient fashion (no light is wasted in color filters, improving low-light performance and signal-to-noise ratio).
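For reference, here’s what the traditional math looks like, using the classic BT.601-style coefficients (one common convention among several):

```python
def rgb_to_yuv(r, g, b):
    """Classic BT.601-style RGB -> YUV conversion (one of several
    common coefficient sets), with R, G, B in the range 0..1."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma: weighted toward green
    u = 0.492 * (b - y)                     # blue-yellow difference
    v = 0.877 * (r - y)                     # red-cyan difference
    return y, u, v

print(rgb_to_yuv(1.0, 0.0, 0.0))   # pure red: large positive V, small negative U
print(rgb_to_yuv(1.0, 1.0, 1.0))   # white: Y = 1, U = V = 0
```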
This splitter technology can be fabricated on top of CCD, CMOS, or SPAD detectors as part of the wafer production process (just like standard RGB filters), so it’s of interest to every camera sensor manufacturer on the planet. Similarly, the fact that it improves the images that are used both by humans and machine vision systems is of interest to anyone creating robots, autonomous vehicles, surveillance systems, and the list goes on.
As usual, I, for one, am very impressed. This is the sort of thing I would never have thought up myself in 1,000 years. I’m also excited by how fast things are moving. The folks at eyeo incorporated the company in 2024. They received 15 million euros of seed investment earlier this year, and they are already involved in lead customer engagements. All I can say is that I see a very bright (no pun intended) future for eyeo. What say you? Do you have any thoughts you’d care to share on anything you’ve read here?