
Samsung’s 50-Mpixel GN2 Image Sensor Sports Pro Camera Features

Could this Sensor Kill Standalone Cameras Completely?

In August, at the Hot Chips 33 conference, held online this year thanks to COVID-19, Sooki Yoon described the Samsung 50-Mpixel GN2 image sensor, which the company announced earlier this year. According to Yoon’s presentation, this sensor is destined for mobile phones such as the Xiaomi Mi 11 Ultra/Pro, and it gives those phones camera features that shame most compact, mirrorless, and dSLR cameras.

Camera phones have been systematically wiping out the photographic enthusiast camera market as their cameras accrete better and better features. The Samsung GN2 sensor may be good enough to put the final stake in the heart of consumer devices that can “only” function as cameras – you know, like cameras.

The Samsung GN2 image sensor is a 2-die sandwich consisting of an image sensor die with 8160×6144 pixels and a die that holds the Analog-to-Digital Converter (ADC) and the image-processing digital circuitry. The pixel die holds the 50 million 1.4 μm pixels and therefore uses larger lithographic geometries to image the circuits on the die. The die containing the ADC and image-processing circuitry uses 28 nm lithography, which is the current economic sweet spot for chip making.

The GN2 sensor has numerous outstanding features including:

  •         50-Mpixel resolution
  •         Advanced DPAF (Dual-Pixel Auto Focus) using diagonally sliced phase-detecting pixels
  •         Smart ISO Pro for wide dynamic range
  •         Staggered HDR (High Dynamic Range)
  •         Significant power reduction, thanks to reduced ADC operating voltage

Each of these features deserves its own elaboration:

50-Mpixel resolution

Image sensor resolution is always at war with pixel dynamic range. To get more resolution, you need more pixels. Put more pixels on the same size chip and each pixel must become smaller. But the charge wells of smaller pixels hold fewer electrons, which results in noisy shadows.

Samsung’s solution to the competing design goals of more pixel resolution and low image noise is to put the 50 Mpixels on the GN2 sensor on a larger piece of silicon. Figure 1 illustrates the evolution of Samsung mobile phone image sensors culminating in the GN2.

Fig 1: The Samsung GN2 mobile image sensor achieves high resolution with low noise by using an enlarged sensor die. (Image Credit: Samsung)

As Fig 1 shows, the Samsung GN2 image sensor achieves high resolution with low noise by placing larger pixels with larger charge wells on a larger silicon die. Previous Samsung image sensors employed 0.8 and 1.2 μm pixels, while the GN2 image sensor employs 1.4 μm pixels. The result, shown in Fig 2, is greatly increased sensitivity thanks to a 33 percent boost in the maximum number of electrons that can be stored in the pixel charge well, which results in more dynamic range with less visible noise in the image shadows.
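The link between charge-well capacity and dynamic range can be sketched with a quick calculation. The numbers below are illustrative assumptions, not published GN2 figures; the point is that a 33 percent larger full well buys roughly 2.5 dB of shot-noise-limited dynamic range:

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Dynamic range in dB: ratio of full-well capacity to read-noise floor."""
    return 20 * math.log10(full_well_e / read_noise_e)

# Hypothetical values for illustration; Samsung does not publish these figures.
small_pixel_fw = 6000    # electrons, assumed full well of a smaller pixel
large_pixel_fw = 8000    # ~33 percent more electrons, as claimed for the 1.4 um pixel
read_noise = 2.0         # electrons RMS, assumed

print(f"smaller pixel: {dynamic_range_db(small_pixel_fw, read_noise):.1f} dB")
print(f"larger pixel:  {dynamic_range_db(large_pixel_fw, read_noise):.1f} dB")
```

Under these assumptions the larger well yields about 72 dB versus about 69.5 dB, and the extra headroom shows up exactly where the article says it does: in the shadows.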

Fig 2: The Samsung GN2 image sensor’s 1.4 μm pixel stores 33 percent more electrons than a 0.8 μm pixel, which results in more dynamic range and lower noise in the image shadows. (Image Credit: Samsung)

Advanced DPAF

The top camera-focusing technique at the moment is dual-pixel autofocus (DPAF), which uses a split pixel to create a double image on one imaging pixel: each imaging pixel consists of two photodiodes that can be used both for imaging and for focusing. Fig 3 illustrates how this technique works.

Fig 3: Dual-Pixel Auto Focus (DPAF) projects left and right images on a split pixel and detects when an image is in focus by sensing when the phases of the left and right images match on the split pixel readout. (Image Credit: Samsung)

The DPAF focusing technique projects left and right images on a pixel’s two photodiodes and detects when an image is in focus by sensing when the phase outputs of the left and right photodiodes match during a split-pixel readout. During imaging, the outputs of the left and right photodiodes are combined to produce the image.
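The phase-matching step can be sketched as finding the shift that best aligns the left and right photodiode signals; zero shift means the image is in focus. This is a toy 1-D model with synthetic data, not Samsung's readout pipeline:

```python
import numpy as np

def phase_shift(left, right, max_shift=8):
    """Estimate the shift between left- and right-photodiode signals by
    maximizing their overlap correlation. Zero shift means in focus."""
    best, best_score = 0, -np.inf
    n = len(left)
    for s in range(-max_shift, max_shift + 1):
        a = left[max(0, s):n + min(0, s)]
        b = right[max(0, -s):n - max(0, s)]
        score = float(np.dot(a, b))
        if score > best_score:
            best_score, best = score, s
    return best

# Synthetic 1-D scene: defocused optics shift the two views apart (assumed 6 samples).
scene = np.exp(-0.5 * ((np.arange(64) - 32) / 3.0) ** 2)
left = np.roll(scene, -3)
right = np.roll(scene, +3)
print(phase_shift(left, right))   # nonzero -> defocused; the lens moves until it reads 0
```

In a real sensor this comparison happens per focus zone, and the sign and magnitude of the shift tell the lens which way and how far to move.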

DPAF isn’t new. For example, Canon has been using DPAF on its cameras starting with the 70D dSLR, introduced in 2013. However, DPAF can have an Achilles’ heel: if the two photodiodes in the phase-sensing pixel are optically isolated with a vertical optical wall, then the pixel cannot detect the phase shift of horizontal lines, because a horizontal line looks the same (has the same phase) whether shifted right or left. Samsung’s ingenious solution is to optically split some of the GN2 sensor’s pixels – the green ones – diagonally instead of vertically. To be clear, the green pixels are split vertically in their electrical readout but diagonally in their optical isolation, as shown in Fig 4.


Fig 4: Earlier dual-pixel phase detection directed light to the left and right phase-detecting pixel photodiodes using vertical Deep Trench Isolation (DTI). The Samsung GN2 sensor uses a diagonal DTI structure to allow the two photodiodes in a green pixel to detect and focus on both horizontal and vertical lines. (Image Credit: Samsung)

Diagonal optical isolation allows each green photodiode to respond to phase differences for both horizontal and vertical lines, which gives the camera better DPAF capabilities. Fig 5 illustrates the difference between vertically isolated and diagonally isolated phase-detection subpixels. In the image on the left, the vertically isolated DPAF sensor cannot focus in any zone because the zones contain only horizontal lines. In the image on the right, the diagonally isolated sensor can focus on the horizontal lines in every zone.
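The geometric argument can be demonstrated numerically. In this toy model (my construction, not Samsung's optics), defocus displaces the two photodiode views along the axis that separates them: purely horizontally for a vertical split, and with a vertical component for a diagonal split. For a horizontal-stripe scene, only the diagonal split produces a detectable difference:

```python
import numpy as np

def stripes(h, w, period=8):
    """Horizontal stripes: intensity varies only with row."""
    rows = np.arange(h).reshape(-1, 1)
    return 0.5 + 0.5 * np.sin(2 * np.pi * rows / period) * np.ones((1, w))

img = stripes(64, 64)

# Vertical split -> the two views shift horizontally; diagonal split -> the
# shift has a vertical component too (toy model of the DTI geometry).
vertical_pair = (np.roll(img, 3, axis=1), np.roll(img, -3, axis=1))
diagonal_pair = (np.roll(np.roll(img, 2, axis=0), 2, axis=1),
                 np.roll(np.roll(img, -2, axis=0), -2, axis=1))

def phase_signal(a, b):
    """Mean absolute difference between the two views; zero means undetectable."""
    return float(np.mean(np.abs(a - b)))

print(phase_signal(*vertical_pair))   # 0.0 -> vertical split is blind to horizontal lines
print(phase_signal(*diagonal_pair))   # > 0 -> diagonal split sees them
```

Shifting horizontal stripes sideways changes nothing, so the vertically split pair is byte-for-byte identical; the diagonal displacement moves the stripes and yields a usable focus signal.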

Fig 5: Vertically isolated phase-detection subpixels cannot focus on horizontal lines because shifting these lines right or left does not shift the phase of the resulting signal in the photodiode. A diagonally isolated, phase-detecting pixel can detect horizontal lines. (Image Credit: Samsung)

Smart ISO Pro

Each group of four pixels in the Samsung GN2 sensor has a separate dual gain control switch to adjust the ISO sensitivity of the pixel group. For brightly lit images, the sensor turns the pixel gain down so that the full voltage span of the pixel’s image signal matches the full input range of the pixel ADC (Low ISO mode). For low light, the sensor boosts the gain of the pixel group so that the full input range of the ADC covers just the low end of the luminance range (High ISO mode), as shown in Fig 6.
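The gain-selection logic can be sketched as follows. The gain values and ADC parameters here are assumptions for illustration, not Samsung's actual ratios; the idea is simply that bright groups take low gain so they span the ADC, while dim groups take high gain so the ADC's resolution is spent on the low end:

```python
def digitize(signal_v, gain, adc_full_scale_v=1.0, bits=10):
    """Apply conversion gain, then quantize into the ADC's input range."""
    v = min(signal_v * gain, adc_full_scale_v)
    return round(v / adc_full_scale_v * (2 ** bits - 1))

def smart_iso(group_peak_v, adc_full_scale_v=1.0):
    """Pick low or high conversion gain for a 4-pixel group (illustrative values)."""
    LOW_GAIN, HIGH_GAIN = 1.0, 8.0   # assumed gains, not Samsung's figures
    # If the high gain would clip the group's peak, fall back to low gain.
    return LOW_GAIN if group_peak_v * HIGH_GAIN > adc_full_scale_v else HIGH_GAIN

for peak in (0.9, 0.05):             # bright group, dim group (volts)
    g = smart_iso(peak)
    print(f"peak {peak} V -> gain {g}x -> code {digitize(peak, g)}")
```

With high gain, the dim group's 0.05 V signal lands at code 409 instead of code 51, so quantization and ADC noise matter far less in the shadows.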

Fig 6: The Samsung GN2 image sensor can operate in High ISO mode for dimly lit scenes, to achieve lower noise, and switches to Low ISO mode for brightly lit scenes. (Image Credit: Samsung) 

Staggered HDR

High Dynamic Range (HDR) is a relatively new imaging technique that “appears” to extend the camera’s dynamic luminance range. It’s a compression technique that merges two or more images taken at different sensor sensitivities to accommodate both the brightly lit and dimly lit portions of an image. It helps to bring out details in the shadows of high-contrast scenes. Because HDR techniques use multiple images of the same scene to achieve the effect, HDR can result in blurry scenes if the separate images don’t overlay perfectly. Image stretching algorithms can fix some of the blur, but the best way to reduce blur is to reduce the time between the captures of the multiple images.

Staggered HDR is a way to reduce blur by taking three different exposures in the time normally used to take one image. Samsung says this technique makes use of a rolling shutter, which allows the image sensor to take a long exposure, followed by a medium exposure, and then a short exposure, for dim, moderate, and bright imaging respectively, and then output those three images with some overlap. Using the rolling shutter, the sensor first takes the long exposure and starts to output the exposure a row at a time, with all pixel voltages in a row transferring simultaneously from the pixel row to the ADC.

Before all rows of the long exposure have been fully output, the sensor takes the medium exposure and starts to output it before the long exposure has been fully output. It then does the same for the short exposure. The sensor uses virtual MIPI channels to output the three exposures with some overlap, and the GN2 sensor’s image-processing die weaves the images together to produce the HDR image. If this all seems confusing, perhaps Fig 7 can clear up the sequence of events.
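The weaving step itself is not published, but a generic HDR merge shows the principle: normalize each exposure by its duration so all three estimate the same scene radiance, and exclude saturated pixels so the short exposure supplies the bright areas. This is a minimal sketch, not the GN2's on-die algorithm:

```python
import numpy as np

def merge_staggered_hdr(exposures, times, saturation=0.98):
    """Merge exposures of differing durations into one radiance estimate."""
    num = np.zeros_like(exposures[0], dtype=float)
    den = np.zeros_like(exposures[0], dtype=float)
    for img, t in zip(exposures, times):
        w = (img < saturation).astype(float)   # trust only unsaturated pixels
        num += w * img / t                     # per-pixel radiance estimate
        den += w
    return num / np.maximum(den, 1)            # avoid divide-by-zero

# Toy scene: true radiance spans three orders of magnitude.
radiance = np.array([0.01, 0.1, 1.0, 10.0])
times = [1.0, 0.25, 0.0625]                    # long, medium, short exposures
shots = [np.clip(radiance * t, 0, 1.0) for t in times]
print(merge_staggered_hdr(shots, times))       # recovers the full radiance range
```

The long exposure measures the 0.01 value that clips to zero resolution in the short shot, and the short exposure measures the 10.0 value that saturates the long shot; the merge keeps both.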

Fig 7: To reduce blur in Staggered HDR mode, the Samsung GN2 sensor takes a long exposure for dimly lit areas and starts to output that image on one virtual MIPI channel (VC0). Before that long-exposure image is fully output, the sensor takes a medium-exposure image and starts to output that image on a second MIPI virtual channel (VC1). It then takes a short exposure for brightly lit areas and outputs that image on a third MIPI virtual channel (VC2). (Image Credit: Samsung)

As Fig 7 shows, the sensor can simultaneously send the autofocus information (marked “AF” in the figure) generated by the DPAF feature on a fourth MIPI virtual channel (VC3).

Significant power reduction

Mobile phones constantly battle Battery’s Law: battery energy capacity does not grow nearly as fast as do silicon-driven capabilities. Consequently, there’s a continuing need to cut power consumption wherever possible. The designers of the Samsung GN2 image sensor attacked one obvious power hog: the high-speed ADC used to convert analog pixel voltages to digital representation. Dropping the ADC’s supply voltage from 2.8 V to 2.2 V cut the ADC’s power consumption by more than 20%, from 311 to 244 mW.
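The arithmetic behind the claim is worth a quick check. The power figures come from Yoon's talk; the scaling comparison at the end is my own observation:

```python
# Quick check of the claimed ADC power saving (figures from Yoon's talk).
p_old, p_new = 311e-3, 244e-3      # watts, at 2.8 V and 2.2 V supplies
v_old, v_new = 2.8, 2.2

reduction = 1 - p_new / p_old
print(f"power reduction: {reduction:.1%}")            # ~21.5%, i.e. "more than 20%"
print(f"voltage ratio:   {v_new / v_old:.3f}")        # 0.786
print(f"quadratic ratio: {(v_new / v_old) ** 2:.3f}") # 0.617
# The measured power ratio (244/311 ~ 0.785) tracks the linear voltage ratio
# almost exactly, rather than the quadratic CV^2f ideal of digital switching,
# which is plausible for largely analog circuitry drawing roughly constant current.
```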

However, merely cutting the supply voltage would have reduced the ADC’s input range to the point where it could not accept the full range of pixel image voltages from the sensor. To combat this problem, Samsung’s engineers added a -0.6 V substrate back bias to the pixel image sensor die, effectively level shifting the sensor’s output range, and they developed a low-threshold-voltage transistor design for use in the ADC circuitry on the 28 nm die. The combination of these two innovations reduced operating power while preserving the full sensor exposure range.

Yoon discussed other innovations designed into the Samsung GN2 image sensor during his presentation at Hot Chips 33, but the features listed above were the ones that really piqued my interest, as a design engineer and as a long-time camera enthusiast who owns several digital cameras. Samsung’s engineers packed a massive amount of technological innovation into this sensor, and I believe it really raises the capability bar for all cameras, in a mobile phone or otherwise.
