
Google used machine learning to improve portrait mode on Pixel 3 and Pixel 3 XL



The Pixel 3 and Pixel 3 XL both have one of the best camera systems available on a smartphone today, and Google achieves this with only a single camera on the back of each phone. Even without a second rear camera, the phones still produce a bokeh effect in portrait mode thanks to software and other processing tricks. In a blog post published today, Google explains how it predicts depth on the Pixel 3 without the use of another camera.
Last year, the Pixel 2 and Pixel 2 XL used Phase Detection Autofocus (PDAF), also known as dual-pixel autofocus, along with a "traditional non-learned stereo algorithm" to take portrait shots. PDAF captures two slightly different views of the same scene, creating a parallax effect. This is used to build the depth map required to achieve the bokeh effect. And while the 2017 models take great portraits whose background blur can be made weaker or stronger, Google wanted to improve portrait mode for the Pixel 3 and Pixel 3 XL.
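To make the parallax idea concrete, here is a minimal sketch (not Google's implementation) of how a coarse depth map could be estimated from the two PDAF half-images by block matching along the baseline axis. The window size, search range, and the assumption of grayscale float inputs are all illustrative.

```python
import numpy as np

def pdaf_disparity(left, right, window=8, max_shift=4):
    """Return a coarse disparity map between the two PDAF views.

    left, right: 2D grayscale arrays of equal shape (the two sub-pixel views).
    Larger disparity roughly means a closer subject; disparity is inversely
    related to depth.
    """
    left = np.asarray(left, dtype=np.float64)
    right = np.asarray(right, dtype=np.float64)
    h, w = left.shape
    disp = np.zeros((h // window, w // window), dtype=np.float32)
    for by in range(h // window):
        for bx in range(w // window):
            y, x = by * window, bx * window
            patch = left[y:y + window, x:x + window]
            best_shift, best_cost = 0, np.inf
            # Search horizontally for the best-matching patch in the other view.
            for s in range(-max_shift, max_shift + 1):
                xs = x + s
                if xs < 0 or xs + window > w:
                    continue
                cost = np.sum((patch - right[y:y + window, xs:xs + window]) ** 2)
                if cost < best_cost:
                    best_cost, best_shift = cost, s
            disp[by, bx] = best_shift
    return disp
```

In practice a production stereo algorithm would add sub-pixel refinement and smoothing, but the principle is the same: shifts between the two views encode distance from the camera.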

While PDAF works reasonably well, several factors can lead to errors when estimating depth. To improve depth estimation on the Pixel 3 models, Google added new cues. One is comparing how out of focus the background appears relative to the sharply focused subject closer to the camera, known as a defocus depth cue. Another is counting the number of pixels a person's face occupies in the image, which helps estimate how far that person is from the camera, known as a semantic cue. Google turned to machine learning to build an algorithm that combines these signals into a more accurate depth estimate. To do this, the company needed to train a neural network.
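As an illustrative sketch only, the snippet below shows one way a small convolutional network could take the two PDAF views plus hand-crafted defocus and semantic cue channels and regress a per-pixel depth map. The architecture, channel layout, and training loop are assumptions for illustration, not the network described in Google's post.

```python
import torch
import torch.nn as nn

class DepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Input: 4 channels = 2 PDAF views + defocus-cue map + semantic (person) mask.
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(64, 1, 1)  # one depth value per pixel

    def forward(self, x):
        return self.head(self.encoder(x))

# Training sketch: supervise against ground-truth depth maps.
model = DepthNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

def train_step(inputs, gt_depth):
    """inputs: (N, 4, H, W) tensor; gt_depth: (N, 1, H, W) tensor."""
    opt.zero_grad()
    loss = loss_fn(model(inputs), gt_depth)
    loss.backward()
    opt.step()
    return loss.item()
```

The point of learning the combination, rather than hand-tuning it, is that the network can weigh the PDAF, defocus, and semantic signals differently depending on the scene.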

Training the network required many high-quality PDAF images paired with ground-truth depth maps. So Google built a rig that holds five Pixel 3 phones at once. Using Wi-Fi, the company took pictures from all five cameras simultaneously (or within about 2 milliseconds of each other). The five different views let Google measure parallax in several different directions, which helped produce more accurate depth information.
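The sketch below, written under stated assumptions, shows why multiple baselines help: each camera pair gives its own depth estimate, and combining them suppresses errors from any single pair. The baselines, focal length, and disparity values are hypothetical.

```python
import numpy as np

def depth_from_multibaseline(disparities_px, baselines_m, focal_px):
    """Estimate depth (meters) for one point from several camera pairs.

    disparities_px: parallax of the point against each reference camera (pixels).
    baselines_m: distance from the central camera to each reference camera (meters).
    focal_px: camera focal length in pixels.
    Each pair gives depth = focal * baseline / disparity; the median suppresses
    mismatches in any single pair.
    """
    disparities_px = np.asarray(disparities_px, dtype=np.float64)
    baselines_m = np.asarray(baselines_m, dtype=np.float64)
    valid = np.abs(disparities_px) > 1e-6
    depths = focal_px * baselines_m[valid] / disparities_px[valid]
    return float(np.median(depths))

# Hypothetical example: four reference cameras around the central phone in the rig.
print(depth_from_multibaseline(
    disparities_px=[150.0, 148.0, 210.0, 212.0],
    baselines_m=[0.10, 0.10, 0.14, 0.14],
    focal_px=3000,
))  # ~2.0 meters
```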

Google continues to use the Pixel cameras to market the phones. A series of videos called 'Unswitchables' shows owners of other phones testing the Pixel 3 to see if they will eventually switch from their current devices. At the start, most say they would never change, but by the end of each episode they are won over by the camera and some of Google's other features.
