
This is how Google’s Night Sight works on your Pixel phone

By Koh Wanzi - 16 Feb 2019


Last year, Google rolled out Night Sight for its Pixel phones, including the Pixel 2 and the original 2016 Pixel. First announced alongside the Pixel 3 phones at Google's October hardware event in New York, Night Sight is a dedicated night mode that boosts image quality using Google's computational photography smarts.

In a nutshell, it's night photography on steroids. Google says its goal was to improve photos taken in lighting between 3 and 0.3 lux, which is the difference between a sidewalk lit by street lamps and a room so dark you can't find your keys on the floor. What's more, it does all this with the Pixels' single camera and no LED flash.

Image Source: Google

 

It captures multiple frames

To start off, Night Sight uses positive-shutter-lag, or PSL, which waits until after you press the shutter button before it starts capturing images. This is in contrast with the default picture-taking mode on Pixel phones, which uses a zero-shutter-lag (ZSL) protocol and begins capturing frames once you open the camera app. PSL requires that you hold still for a short time after pressing the shutter, but it allows for longer exposures and thus improves the signal-to-noise ratio at much lower brightness levels.
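
To make the contrast concrete, here is a minimal Python sketch of the two capture strategies. The camera call, buffer size, and exposure values are hypothetical stand-ins rather than the actual Pixel camera pipeline; the point is only that ZSL assembles a shot from frames buffered before the press, while PSL starts a fresh, longer-exposure burst after it.

```python
# Illustrative sketch of zero-shutter-lag (ZSL) vs. positive-shutter-lag (PSL)
# capture. The camera API is simulated; buffer sizes and exposures are
# placeholders, not Google's actual values.
import time
from collections import deque


def capture_frame(exposure_s):
    """Stand-in for a real camera capture call."""
    time.sleep(exposure_s)          # simulate the sensor integrating light
    return {"exposure_s": exposure_s, "timestamp": time.time()}


def zsl_session(shutter_pressed_after=0.5, exposure_s=1 / 30):
    """ZSL: frames stream into a ring buffer as soon as the viewfinder opens,
    so the shot is assembled from frames captured *before* the shutter press."""
    ring = deque(maxlen=9)          # small rolling buffer of recent frames
    start = time.time()
    while time.time() - start < shutter_pressed_after:
        ring.append(capture_frame(exposure_s))
    return list(ring)               # frames already in hand at press time


def psl_burst(num_frames=6, exposure_s=1 / 15):
    """PSL: capture only starts after the shutter press, which frees the
    camera to use much longer per-frame exposures for a better signal-to-noise ratio."""
    return [capture_frame(exposure_s) for _ in range(num_frames)]


if __name__ == "__main__":
    before = zsl_session()
    after = psl_burst()
    print(f"ZSL had {len(before)} recent frames ready at the shutter press")
    print(f"PSL captured {len(after)} longer-exposure frames after the press")
```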

 

It adjusts for shaky hands

However, longer exposure times can lead to motion blur. Beyond the optical image stabilization on the Pixel 2 and 3, Google uses something called motion metering, which looks at the phone's movement, the movement of objects in the scene, and the amount of light available to decide on the exposure time that minimizes motion blur. This means that if the phone is stabilized on a tripod, for example, the exposure for each frame can be increased to as much as one second.

Ultimately, the number of frames and the per-frame exposure time depend on the Pixel model you have (the first Pixel doesn't have OIS, so exposure times are shorter), how much your hand is shaking, how much motion there is in the scene, and how bright the scene is. To summarize, Night Sight can capture anywhere from 15 frames of 1/15 second (or less) each to 6 frames of one second each, as the sketch below illustrates.
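
The sketch below is a rough heuristic in the spirit of motion metering, not Google's actual logic: the motion and lux thresholds are invented for illustration, and only the end points (roughly 15 frames of 1/15 second versus 6 frames of one second) come from the numbers above.

```python
# Rough heuristic in the spirit of motion metering: pick a per-frame exposure
# from how much the phone and the scene are moving, then fit the frame count
# to a fixed capture budget. Thresholds and the budget are illustrative
# guesses, not Google's tuning.

def choose_capture_plan(handshake_deg_per_s, scene_motion_px_per_s,
                        scene_lux, has_ois=True):
    # 1) Cap the per-frame exposure based on motion.
    if handshake_deg_per_s < 0.05 and scene_motion_px_per_s < 1.0:
        max_exposure = 1.0           # effectively on a tripod, static scene
    elif has_ois:
        max_exposure = 1 / 3 if handshake_deg_per_s < 1.0 else 1 / 15
    else:
        max_exposure = 1 / 15        # first-gen Pixel: no OIS, keep exposures short

    # 2) Darker scenes want longer exposures, up to the motion cap.
    wanted_exposure = min(max_exposure, 0.3 / max(scene_lux, 0.3))

    # 3) Spend a fixed total budget across 6 to 15 frames.
    total_budget_s = 6.0
    num_frames = int(min(15, max(6, total_budget_s / wanted_exposure)))
    return num_frames, wanted_exposure


if __name__ == "__main__":
    # Tripod-like, very dark scene: few, long frames (6 x 1 s).
    print(choose_capture_plan(0.01, 0.2, scene_lux=0.3))
    # Shaky hand, no OIS, brighter scene: many short frames (15 x 1/15 s).
    print(choose_capture_plan(2.0, 5.0, scene_lux=3.0, has_ois=False))
```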

 

It aligns and merges different frames

Google then aligns and merges the frames it's captured to further reduce image noise, and all this happens within a few seconds on the phone. The idea of averaging frames to reduce image noise is an old one, and the Pixel 3 leverages a tweaked version of Google’s Super Res Zoom technology to average multiple images together. However, the Pixel and Pixel 2 use a modified version of HDR+’s merging algorithm, which is better able to detect and reject misaligned pieces of frames than the regular HDR+ algorithm. 

Super Res Zoom produces better results than the HDR+ algorithm, but it can't be implemented on the older Pixel phones because it requires the Pixel 3's faster processor.
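
For intuition, here's a toy merge in Python: frames are averaged tile by tile, and a tile from a non-reference frame is rejected if it differs too much from the reference tile, standing in for motion or misalignment. The real HDR+ and Super Res Zoom merges work on raw frames with sub-pixel alignment, so treat this only as a sketch of the averaging-with-rejection idea.

```python
# Toy version of align-and-merge: average a burst to cut noise, but reject
# tiles that don't match the reference frame so moving objects don't ghost.
import numpy as np


def merge_burst(frames, tile=16, reject_thresh=0.08):
    """frames: list of HxW float arrays in [0, 1]; frames[0] is the reference."""
    ref = frames[0]
    out = np.zeros_like(ref)
    used = np.zeros_like(ref)
    h, w = ref.shape

    for y in range(0, h, tile):
        for x in range(0, w, tile):
            sl = np.s_[y:y + tile, x:x + tile]
            ref_tile = ref[sl]
            acc = ref_tile.copy()
            n = 1
            for frame in frames[1:]:
                cand = frame[sl]
                # Reject tiles that differ too much from the reference:
                # likely motion or misalignment rather than just noise.
                if np.mean(np.abs(cand - ref_tile)) < reject_thresh:
                    acc += cand
                    n += 1
            out[sl] = acc / n
            used[sl] = n
    return out, used


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.tile(np.linspace(0.05, 0.2, 64), (64, 1))      # dim synthetic scene
    burst = [np.clip(clean + rng.normal(0, 0.03, clean.shape), 0, 1)
             for _ in range(8)]
    merged, frames_used = merge_burst(burst)
    print("noise before:", np.std(burst[0] - clean).round(4))
    print("noise after: ", np.std(merged - clean).round(4))
```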

The scene is nearly pitch black without Night Sight.

Here's what it looks like with Night Sight on.

 

It learns how to correctly color a scene

Cameras are able to adjust the colors of images to compensate for the dominant color of illumination, a process known as auto white balancing (AWB). They effectively shift the colors in the image to make it seem as if a scene is lit by neutral white light. However, this process breaks down in very dim lighting, which is why Google developed a learning-based AWB algorithm to cope.

This algorithm is trained to tell the difference between an image with good white balance and a poorly balanced one. When it sees the latter, it can suggest how to shift the image's colors to make the illumination appear more neutral. This sort of training requires photographing a wide range of scenes using Pixel phones, then hand-correcting their white balance while viewing the photos on a color-calibrated monitor.
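
The snippet below sketches where such a model would sit. The gray-world estimate is the classic baseline that fails under heavily tinted, dim light; the learned_gains function here is just a hypothetical placeholder for a trained predictor and simply falls back to gray-world.

```python
# Minimal white-balance sketch. Gray-world is the classic baseline that breaks
# down in very dim, strongly tinted light; Night Sight uses a learned model
# instead, represented here only by a placeholder function.
import numpy as np


def gray_world_gains(rgb):
    """Classic AWB: assume the average scene color should be neutral grey."""
    means = rgb.reshape(-1, 3).mean(axis=0)
    return means.mean() / means            # per-channel gains


def learned_gains(rgb):
    """Placeholder for a model trained on hand-corrected Pixel photos.
    A real model would map image statistics (or the image itself) to the
    gains directly; here it just falls back to gray-world."""
    return gray_world_gains(rgb)


def apply_awb(rgb, gains):
    return np.clip(rgb * gains, 0.0, 1.0)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    scene = rng.uniform(0.0, 0.2, size=(4, 4, 3))      # dim scene
    tinted = scene * np.array([1.0, 0.7, 0.4])         # warm street-lamp cast
    balanced = apply_awb(tinted, learned_gains(tinted))
    print("channel means before:", tinted.reshape(-1, 3).mean(axis=0).round(3))
    print("channel means after: ", balanced.reshape(-1, 3).mean(axis=0).round(3))
```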

 

It tone maps scenes that are too dark to see

Image Source: Google

One of the most striking things in early Night Sight reviews has been how it can illuminate scenes that look nearly pitch black when captured using the normal camera mode. Google does this with tone mapping, which essentially maps one set of tones to another in order to brighten the shadows while preserving the overall impression of how bright the scene was.

Humans stop seeing color in very dim lighting because the cone cells in our retinas stop functioning. Rod cells still work, but they can't distinguish between different wavelengths of light, which is what lets us perceive color. Rod cells also have poor spatial acuity, which is why objects seem less distinct at night. Night Sight aims to overcome these weaknesses, and the resulting pictures are both sharp and full of color.

Image Source: Google

A high-end DSLR can make an image captured at night look like it was shot in daylight given a long enough exposure, but that's probably not the effect you want. To deal with this, Night Sight also applies an S-curve in its tone mapping, which increases contrast while still keeping much of the scene in shadow. This way, you can see what's going on in the scene, but you can still tell that it was taken in the dark.
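
Here's a small illustrative pass along those lines: lift the shadows so detail becomes visible, then run the result through an S-curve so the image regains contrast and still reads as a night shot. The gamma value and curve strength are arbitrary choices for the example, not Night Sight's real tuning.

```python
# Toy tone-mapping pass: brighten shadows, then apply an S-curve for contrast
# so the result still looks like a night scene. Parameters are illustrative.
import numpy as np


def s_curve(x, strength=6.0):
    """Smooth S-shaped contrast curve on [0, 1], centred at 0.5."""
    return 1.0 / (1.0 + np.exp(-strength * (x - 0.5)))


def night_tone_map(linear, shadow_gamma=0.45, strength=6.0):
    """linear: float array in [0, 1] representing a dark, merged exposure."""
    lifted = np.power(np.clip(linear, 0.0, 1.0), shadow_gamma)   # brighten shadows
    # Normalize the S-curve so that 0 still maps to 0 and 1 maps to 1.
    lo, hi = s_curve(np.array([0.0, 1.0]), strength)
    return (s_curve(lifted, strength) - lo) / (hi - lo)


if __name__ == "__main__":
    dark_scene = np.linspace(0.0, 0.15, 6)      # tones from a near-black capture
    print("input :", dark_scene.round(3))
    print("output:", night_tone_map(dark_scene).round(3))
```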

Having said that, photos shot at night on the Pixel 3 can still sometimes look uncannily like they were taken in daylight, which illustrates just how tricky the process really is.
