How computational photography is letting smartphones perform like ILCs

By Marcus Wong - 21 Mar 2019


When taking a photo is more than just recording an image

Computational photography is the capture and processing of images using digital computation on top of the regular optical processes. Simply put, instead of just recording a scene as the sensor captures it, computational photography also uses information gathered from other hardware and sensors to fill in details that would otherwise be missed. Here are three applications of this technique that you'll find in the latest smartphones today.

 

Seeing in the dark with Google

The problem with capturing images in low light on a digital sensor is image noise, which shows up as artefacts and random spots of color in the picture. Every camera suffers from this because in low light relatively few photons reach the sensor, so the natural variation in their arrival (shot noise) becomes visible.

Traditionally, we counter this by letting more light hit the sensor for each exposure. Placing the camera on a tripod works, but you’ll then need your subject to hold still for the length of the capture. To get around this, Google’s Night Sight instead captures a burst of short exposures and merges them, using optical flow to measure motion in the scene and work out the optimal exposure time for each frame.
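
To get a feel for the idea, here’s a minimal Python sketch of merging a burst of short exposures: it aligns each frame to the first and averages them so random noise cancels out. It uses stock OpenCV alignment and is only an illustration of the principle, not Google’s actual pipeline.

import cv2
import numpy as np

def merge_short_exposures(frames):
    # frames: list of same-sized BGR images from a handheld burst
    base = frames[0]
    base_gray = cv2.cvtColor(base, cv2.COLOR_BGR2GRAY)
    accumulator = base.astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-4)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        warp = np.eye(2, 3, dtype=np.float32)
        # Estimate the small shift caused by hand shake between frames
        _, warp = cv2.findTransformECC(base_gray, gray, warp,
                                       cv2.MOTION_TRANSLATION, criteria, None, 5)
        aligned = cv2.warpAffine(frame, warp, (base.shape[1], base.shape[0]),
                                 flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
        accumulator += aligned.astype(np.float32)
    # Averaging N aligned frames cuts random noise by roughly sqrt(N)
    return (accumulator / len(frames)).astype(np.uint8)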

Beyond dealing with noise, Google's Night Sight also uses machine learning to teach the camera how to shift the color balance under troublesome lighting. A wide variety of scenes were captured with Pixel phones, then hand-corrected for proper white balance on a color-calibrated monitor, and this collection serves as the reference dataset the algorithm draws from.
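
As a rough illustration of how such a dataset can be put to work, the Python sketch below trains a tiny regressor to map simple color statistics of an image to red and blue channel gains. The features, the model choice and the training pairs are all assumptions for illustration, not Google's actual Night Sight model.

import numpy as np
from sklearn.linear_model import Ridge

def color_features(image):
    # image: HxWx3 float array in [0, 1], channels in RGB order
    r, g, b = image.reshape(-1, 3).mean(axis=0)
    return np.array([r / g, b / g, image.max(), image.mean()])

def train_wb_model(train_images, corrected_gains):
    # corrected_gains: (N, 2) array of [red_gain, blue_gain] chosen by hand
    # on a color-calibrated monitor for each training scene
    X = np.stack([color_features(img) for img in train_images])
    y = np.asarray(corrected_gains)
    return Ridge(alpha=1.0).fit(X, y)

def apply_white_balance(image, model):
    red_gain, blue_gain = model.predict(color_features(image)[None, :])[0]
    balanced = image.copy()
    balanced[..., 0] *= red_gain
    balanced[..., 2] *= blue_gain
    return np.clip(balanced, 0.0, 1.0)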

To prevent the camera from doing too good a job – making a night scene look like it was shot in the day for example – Google’s engineers introduced multiple S-curves for adjusting light levels over various regions of the image instead of applying a global adjustment, thus keeping the tone mapping accurate.
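
The sketch below shows the general idea of region-by-region tone mapping: each tile of the image gets its own gentle S-curve centred on its own mid-tone, instead of one global curve. The sigmoid shape and tile grid are illustrative choices rather than Google's exact curves, and a real pipeline would blend neighbouring tiles to avoid visible seams.

import numpy as np

def s_curve(values, midpoint, strength=8.0):
    # Gentle sigmoid centred on the region's own mid-tone
    return 1.0 / (1.0 + np.exp(-strength * (values - midpoint)))

def local_tone_map(luminance, tiles=8):
    # luminance: HxW array with values in [0, 1]
    out = np.empty_like(luminance)
    h, w = luminance.shape
    for i in range(tiles):
        for j in range(tiles):
            ys = slice(i * h // tiles, (i + 1) * h // tiles)
            xs = slice(j * w // tiles, (j + 1) * w // tiles)
            tile = luminance[ys, xs]
            mid = float(np.median(tile))            # each region gets its own curve
            curve = s_curve(tile, mid)
            lo, hi = s_curve(0.0, mid), s_curve(1.0, mid)
            out[ys, xs] = (curve - lo) / (hi - lo)  # rescale back to [0, 1]
    return out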

 

Going beyond optical zoom like Huawei

Like every other smartphone on the market, Huawei’s Mate 20 Pro doesn’t come with a zoom lens. However, thanks to its Hybrid Zoom technology, the Mate 20 Pro is still able to produce images with greater detail and fewer artefacts than you’d get from simply cropping the original picture.

You see, Hybrid Zoom intelligently combines data from the phone's multiple cameras to do what’s known as "super resolution". This works similarly to Canon’s Dual Pixel AF tech, where sets of information captured from slightly different points of view are combined for a better result.

By combining multiple lower-resolution samples, the camera is able to create a better, higher-resolution image. The Mate 20 Pro has the advantage of having multiple lenses at different focal lengths, so each one can help fill in the image information needed at the various “zoom” levels. Super Resolution processing is applied to enlarge each side of the image by three times while improving the end result. In addition, compression artefacts are identified and suppressed, allowing the camera to obtain clearer details and textures than with conventional enlargement.
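
A highly simplified sketch of the decision a hybrid zoom has to make is shown below: pick the camera whose optical magnification is closest to (but not beyond) the requested zoom, then crop and upscale the rest digitally. The camera setup and selection rule here are assumptions for illustration; Huawei's pipeline would run its Super Resolution merge of several frames on the crop rather than a plain single-frame upscale.

import cv2

# Assumed camera setup: zoom factor relative to the main camera -> camera name
CAMERAS = {1.0: "wide", 3.0: "telephoto"}

def hybrid_zoom(images, zoom):
    # images: dict mapping camera name -> frame from that camera; zoom >= 1.0
    optical = max(z for z in CAMERAS if z <= zoom)   # best optical starting point
    frame = images[CAMERAS[optical]]
    digital = zoom / optical                         # remaining zoom done digitally
    h, w = frame.shape[:2]
    ch, cw = int(h / digital), int(w / digital)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = frame[y0:y0 + ch, x0:x0 + cw]
    # A real pipeline would merge multiple frames here (super resolution)
    # instead of a plain single-frame upscale
    return cv2.resize(crop, (w, h), interpolation=cv2.INTER_CUBIC)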

 

True Depth with Apple’s Portrait Mode

Shallow depth of field, or the bokeh effect as photographers call it, has always been a bit of a holy grail because it’s generally more pronounced when you use lenses with wider apertures. As you can imagine, these lenses also tend to be larger and more expensive, so bokeh has never been something you’d associate with smartphone cameras - until now.

Apple’s iPhone 7 Plus introduced a dual-camera system that captured two images of the same scene from slightly different angles and used them to create a disparity map, which in turn gave the camera depth information for everything in the frame. And starting with the iPhone X, Apple added a TrueDepth front-facing camera and sensor system to power facial recognition features such as Face ID and Animojis.
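
Here’s a minimal sketch of the disparity idea using OpenCV's stock stereo block matcher on a rectified left/right pair: nearby objects shift more between the two views, so a larger disparity means a closer subject. It's an illustration of the principle, not Apple's implementation.

import cv2

def disparity_map(left_bgr, right_bgr):
    # left_bgr, right_bgr: a rectified stereo pair from the two cameras
    left = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # StereoBM returns fixed-point disparities scaled by 16
    disparity = matcher.compute(left, right).astype("float32") / 16.0
    # Depth is proportional to (focal length * baseline) / disparity,
    # so larger disparity means a closer subject
    return disparity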

More recently, the iPhone XR is able to do depth capture even though it only has a single rear camera. This is possible through the Dual Pixel technique described earlier, combined with neural net software that creates a highly detailed mask around the subject. The software analyzes which parts of the picture belong to a person and which don't, and the mask preserves fine details such as individual hairs and eyeglasses, so when the blurring effect is applied, it doesn't affect your subject. The result is a nice approximation of the bokeh effect.
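
Once a subject mask exists, applying the bokeh effect is conceptually simple, as the sketch below shows: blur the whole frame, then composite the sharp subject back over the blurred background. The mask is assumed to come from a segmentation model; producing it accurately is the hard part that Apple's neural network handles.

import cv2
import numpy as np

def fake_bokeh(image, subject_mask, blur_radius=21):
    # image: BGR frame; subject_mask: HxW float array, 1.0 on the subject, 0.0 elsewhere
    blurred = cv2.GaussianBlur(image, (blur_radius, blur_radius), 0)
    alpha = subject_mask[..., None]                  # broadcast the mask over color channels
    composite = alpha * image.astype(np.float32) + (1.0 - alpha) * blurred.astype(np.float32)
    return composite.astype(np.uint8)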

Apple has also improved its Portrait Mode so you can “change” the aperture after the shot: computational photography adjusts the amount of bokeh to match what you would get if you varied the aperture setting on an actual, old-school camera lens.
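
Conceptually, “changing” the aperture after the fact just means scaling the strength of that blur. The mapping below from f-number to blur radius is an illustrative guess, not Apple's calibration, and plugs into the fake_bokeh sketch above.

def aperture_to_blur_radius(f_number, widest=1.4, max_radius=31):
    # Wider simulated aperture (smaller f-number) -> stronger background blur
    radius = int(max_radius * widest / f_number)
    return radius if radius % 2 == 1 else radius + 1  # GaussianBlur needs an odd kernel size

# e.g. fake_bokeh(image, mask, blur_radius=aperture_to_blur_radius(2.8))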

 

A version of this article first appeared in the Jan 2019 issue of HWM.
