
Google explains Pixel 3's enhanced AI portraits



When the Pixel 2 launched, Google combined a neural network with the camera's phase-detect autofocus (really, the parallax it provides) to determine what was in the foreground. That approach doesn't always work, though, when a scene has little variation or when you shoot through a small aperture. Google addressed this on the Pixel 3 by training a neural network to predict depth from a multitude of cues, such as the typical sizes of objects and the sharpness at different points in the scene. You should see fewer of the glitches that tend to appear in portrait photos, such as background objects that remain sharp.
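To make the idea concrete, here is a minimal sketch (in PyTorch, and not Google's actual model) of a convolutional network that maps a dual-pixel view pair to a per-pixel depth map. The `DepthNet` name, layer sizes, and input resolution are illustrative assumptions; the point is that such a network can exploit any cue in the image, not just parallax, so it keeps working when the parallax signal is weak.

```python
# A minimal sketch of a depth-prediction CNN, assuming PyTorch.
# Not Google's model; layer sizes are arbitrary illustrative choices.
import torch
import torch.nn as nn

class DepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Input: the two dual-pixel views stacked as channels (2 x H x W).
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),  # one depth value per pixel
        )

    def forward(self, dual_pixel_pair):
        return self.decoder(self.encoder(dual_pixel_pair))

# Training would pair each dual-pixel input with a ground-truth depth map
# (captured, in Google's case, by the multi-phone rig described below).
model = DepthNet()
views = torch.randn(1, 2, 128, 128)  # toy dual-pixel input
depth = model(views)                 # predicted depth map, shape (1, 1, 128, 128)
print(depth.shape)
```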

Training that neural network required some creative engineering, Google said. The company built a "Frankenphone" rig out of five Pixel 3s to capture synchronized photos, yielding high-quality depth data free of the aperture and parallax limitations of a single camera.
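For reference, the geometry such a multi-camera rig exploits is classic stereo triangulation: a point's apparent shift (disparity) between two synchronized cameras a known distance apart determines its depth. The focal length and baseline in this sketch are assumed numbers, not Google's calibration.

```python
def depth_from_disparity(disparity_px: float,
                         focal_length_px: float = 3000.0,  # assumed focal length, in pixels
                         baseline_m: float = 0.10) -> float:  # assumed 10 cm camera spacing
    """Classic triangulation: depth = focal_length * baseline / disparity."""
    return focal_length_px * baseline_m / disparity_px

# A point that shifts 60 px between the two views sits about 5 m away.
print(depth_from_disparity(60.0))  # -> 5.0
```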

While you may not be thrilled that Pixel phones still rely on a single rear camera (limiting what the hardware alone can do), this illustrates the advantage of leaning on AI: Google can improve image quality without tying those improvements to hardware upgrades.

