
How Google's 'Frankenphone' taught the Pixel 3 to take portrait photos



Machine learning helps the Google Pixel 3 go beyond optical information alone, letting your smartphone camera separate the foreground from the background for better portrait mode photos.


Google

Portrait mode arrived last year in the Google Pixel 2 smartphone, but researchers used a hacked-together rig of five phones to improve how the feature works in this year's Pixel 3.

Portrait mode simulates the shallow depth of field of higher-end cameras and lenses, which focuses your attention on the subject while softening the background into a smooth blur. That simulation is hard to get right, though, and errors can jump out. For the Pixel 3, Google used artificial intelligence technology to fix the errors.
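Conceptually, the effect boils down to blending a sharp photo with a blurred copy of itself, using a depth map to decide which pixels count as background. Here is a minimal sketch of that idea, assuming OpenCV and NumPy; the file names and the 0.5 depth threshold are placeholders, and Google's real pipeline is far more sophisticated than this.

```python
import cv2
import numpy as np

# Load a photo and a matching depth map (0 = near, 1 = far); both hypothetical.
image = cv2.imread("photo.jpg")
depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

# A heavily blurred copy stands in for the out-of-focus background.
blurred = cv2.GaussianBlur(image, (31, 31), 0)

# Pixels beyond the (arbitrary) threshold are background; feather the mask
# so the sharp-to-blurred transition looks natural rather than cut out.
mask = (depth > 0.5).astype(np.float32)
mask = cv2.GaussianBlur(mask, (31, 31), 0)[..., None]

# Blend: keep the subject sharp, let the background melt into blur.
portrait = (image * (1.0 - mask) + blurred * mask).astype(np.uint8)
cv2.imwrite("portrait.jpg", portrait)
```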

To make it work, Google needed photos to train its AI. Enter the quintet of phones bound together, each capturing the same scene from a slightly different perspective. Those small differences in perspective let computers judge how far away each part of a scene is from the cameras and generate a "depth map" used to determine which background material should be blurred.
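As a rough illustration of how two offset views become a depth map, classic stereo block matching searches for each feature's horizontal shift between the images, its disparity; nearer objects shift more. This sketch uses OpenCV's textbook matcher on two hypothetical frames; Google's pipeline is learned rather than hand-tuned like this.

```python
import cv2

# Two grayscale captures of the same scene from slightly offset cameras.
left = cv2.imread("rig_left.jpg", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("rig_right.jpg", cv2.IMREAD_GRAYSCALE)

# Block matching finds, for each patch in the left image, how far it shifted
# in the right image. OpenCV returns disparity in fixed point (scaled by 16).
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype("float32") / 16.0

# Large disparity means close to the cameras, small disparity means far away,
# so the disparity map already acts as an inverse depth map of the scene.
```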

"We built our own custom Frankenphone rig that contains five Pixel 3 phones, along with a Wi-Fi based solution that allowed us to simultaneously capture photos from all the phones," researcher Rahul Garg and programmer Neal Wadhwa said in a Google blog post Thursday.

The technique shows how deeply new imaging software and hardware are changing photography. Smartphones have small image sensors that can't compete with traditional cameras for image quality, but Google is at the front of the pack in computational photography methods that blur backgrounds, increase resolution, adjust exposure, enhance shadow detail and take photos in the dark.

So where does the Frankenphone come into all this? It's a way to give the phone a view of the world more like what we see with our own two eyes.

Google built a five-phone "Frankenphone" to train its Pixel 3 AI to judge how far away the elements of a scene are.

Google

People can judge depth because we have two eyes separated by a short distance. That means each eye sees a slightly different scene, a difference called parallax. With its iPhone 7 Plus two years ago, Apple used the parallax between its two rear-facing cameras for its first crack at portrait mode.
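The geometry behind parallax is simple enough to work out on the back of an envelope: depth is focal length times the camera (or eye) separation, divided by the observed shift. The numbers below are made up purely for illustration.

```python
# depth = focal_length * baseline / disparity
focal_length_px = 1000.0  # focal length expressed in pixels (hypothetical)
baseline_m = 0.064        # ~6.4 cm, roughly the spacing of human eyes
disparity_px = 8.0        # how far a feature shifts between the two views

depth_m = focal_length_px * baseline_m / disparity_px
print(depth_m)  # 8.0 m; halve the disparity and the object is twice as far
```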

The Google Pixel 2 and Pixel 3 have only single rear-facing cameras, but each pixel in a photo from the phones is actually created by two light detectors, one on the left half of the pixel site and one on the right half. The view from the left side is slightly different from the right, and that parallax is enough to judge some depth information.
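In other words, each dual-pixel frame can be split into a left-half view and a right-half view and treated as a tiny-baseline stereo pair. The array layout below is invented for the sketch; real sensor data isn't delivered this way.

```python
import numpy as np

# Pretend raw dual-pixel frame: at every pixel site, channel 0 holds the
# left-half sample and channel 1 the right-half sample (invented layout).
raw = np.random.rand(1512, 2016, 2).astype(np.float32)
left_view = raw[..., 0]
right_view = raw[..., 1]

# The two views differ by a sub-pixel horizontal shift, so the same stereo
# matching idea as above applies, just with a far smaller baseline. Note the
# shift is horizontal only; there is no up-down parallax to lean on.
```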

But not without problems, Google said. For example, it can judge only left-right parallax in a scene, not up-down parallax. So Google gave the Pixel 3 a leg up with AI.

AI is good at bringing other information into the mix: small differences in focus, for example, or the awareness that a cat in the distance looks smaller than one close up. The way AI works today, though, a model must be trained on real data. In this case, that meant taking quintets of photos from the Frankenphone, with both left-right and up-down parallax information.
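A heavily simplified sketch of what such training might look like, assuming PyTorch: a small network sees an image plus the coarse dual-pixel depth and learns to predict the higher-quality depth triangulated from the five-phone rig. Every name, shape and architecture choice here is invented for illustration, not Google's actual model.

```python
import torch
import torch.nn as nn

class DepthRefiner(nn.Module):
    """Toy network: RGB (3 channels) + coarse depth (1) in, refined depth out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, rgb, coarse_depth):
        return self.net(torch.cat([rgb, coarse_depth], dim=1))

model = DepthRefiner()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# Random tensors stand in for real training batches: the phone's own view,
# its coarse dual-pixel depth, and ground truth from the five-camera rig.
rgb = torch.rand(4, 3, 64, 64)
coarse_depth = torch.rand(4, 1, 64, 64)
rig_depth = torch.rand(4, 1, 64, 64)

for step in range(100):
    pred = model(rgb, coarse_depth)
    loss = loss_fn(pred, rig_depth)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```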

The trained AI, combined with data from another AI system that detects people in photos, produces the Pixel 3's better portrait shots.
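One plausible way to combine the two signals, sketched here with made-up inputs: use the depth map to grade blur strength, but never blur pixels the person detector claims for the subject.

```python
import numpy as np

def blur_strength(depth, person_mask, background_threshold=0.5):
    """Per-pixel blur weight in [0, 1]; 0 means keep sharp (all inputs HxW)."""
    background = (depth > background_threshold).astype(np.float32)
    # The segmentation mask vetoes blur wherever it says "this is a person".
    return background * (1.0 - person_mask)

# Toy inputs: random depth, and a person occupying the center of the frame.
depth = np.random.rand(64, 64)
person = np.zeros((64, 64), dtype=np.float32)
person[16:48, 16:48] = 1.0
weights = blur_strength(depth, person)
```

However Google actually weighs the two signals, the upshot is the same: sharper subjects and fewer blur errors creeping in at their edges.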
