
Google AI details how Pixel 3 captures and selects Top Shot



Top Shot is one of the many AI-powered camera features Google introduced with the Pixel 3. Google AI has now detailed how the feature works and what attributes the phone looks for when proposing an alternative frame.

At a high level, Top Shot captures and analyzes the 1.5 seconds before and after the shutter button is pressed. Up to 90 images are captured, and Pixel 3 selects up to two alternatives to save in high resolution.
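The capture window described above can be sketched as a rolling buffer that keeps frames around the shutter press and then picks the best-scoring alternatives. The frame rate, buffer behavior, and scoring function here are illustrative assumptions, not Google's implementation:

```python
from collections import deque

# Assumed capture parameters: ~1.5 s of frames on each side of the shutter
# press, at most 90 frames total in the rolling buffer.
FPS = 30
WINDOW_FRAMES = int(1.5 * FPS)  # frames kept on each side of the shutter press
MAX_FRAMES = 90

def capture_window(stream, shutter_index):
    """Collect frames from ~1.5 s before to ~1.5 s after the shutter press."""
    buffer = deque(maxlen=MAX_FRAMES)  # old frames fall out automatically
    for i, frame in enumerate(stream):
        buffer.append((i, frame))
        if i >= shutter_index + WINDOW_FRAMES:
            break
    # Keep only frames inside the window around the shutter press.
    return [(i, f) for i, f in buffer if abs(i - shutter_index) <= WINDOW_FRAMES]

def top_alternatives(frames, score, shutter_index, k=2):
    """Pick up to k alternative frames (excluding the shutter frame) by score."""
    candidates = [(i, f) for i, f in frames if i != shutter_index]
    candidates.sort(key=lambda pair: score(pair[1]), reverse=True)
    return candidates[:k]
```

The shutter frame itself is excluded from the alternatives, since it is always saved anyway.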

The shutter frame is processed and saved first, with the best alternative shots saved afterwards. The Pixel Visual Core on Pixel 3 processes these top alternative shots as HDR+ images with very little added latency, and they are embedded in the Motion Photo file.

The work done with Google Clips inspired the Pixel 3 feature, with the company creating a computer vision model to recognize three key attributes associated with the "best moments":

  • Functional qualities like lighting
  • Objective attributes (are the subject's eyes open? Are they smiling?)
  • Subjective qualities like emotional expressions

The neural network detects low-level visual attributes in early layers, such as whether the subject is blurry, and then dedicates additional compute and parameters to more complex objective attributes, like whether the subject's eyes are open, and subjective attributes, like whether there is an emotional expression of amusement or surprise.
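The tiered analysis described above can be sketched as a cascade: a cheap low-level check runs first, and the more expensive objective and subjective models only run when the frame passes it. All names, thresholds, and weights below are illustrative assumptions, not Google's actual model:

```python
# Assumed threshold and weights for the sketch; not from the source.
SHARPNESS_THRESHOLD = 0.3
LOW_LEVEL_WEIGHT = 0.2
OBJECTIVE_WEIGHT = 0.4
SUBJECTIVE_WEIGHT = 0.4

def score_frame(frame, sharpness_model, eyes_open_model, expression_model):
    """Score a frame, skipping expensive models for clearly blurry frames."""
    sharpness = sharpness_model(frame)  # low-level attribute, early layers
    if sharpness < SHARPNESS_THRESHOLD:
        # Frame is too blurry: don't spend compute on the deeper attributes.
        return sharpness * LOW_LEVEL_WEIGHT
    objective = eyes_open_model(frame)    # e.g. are the subject's eyes open?
    subjective = expression_model(frame)  # e.g. amusement or surprise?
    return (sharpness * LOW_LEVEL_WEIGHT
            + objective * OBJECTIVE_WEIGHT
            + subjective * SUBJECTIVE_WEIGHT)
```

The early exit mirrors the idea of spending parameters and compute only on frames that survive the low-level checks.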

According to Google, Top Shot prioritizes face analysis, but the company has also worked to identify good moments in which a face is not the main subject. It created additional scores that feed into the overall frame quality score:

• Subject motion saliency score – low-resolution optical flow between the current frame and the previous frame is estimated in the ISP to determine whether there is salient object motion in the scene.
• Global motion blur likelihood – computed from the camera motion and the exposure time. The camera motion is calculated from sensor data from the gyroscope and OIS (optical image stabilization).
• "3A" scores – the status of auto exposure, auto focus, and auto white balance is also considered.
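The global motion blur signal in particular lends itself to a simple sketch: the expected blur extent grows with the camera's residual angular velocity (gyroscope reading minus OIS correction) multiplied by the exposure time. The constants and the soft-threshold mapping below are illustrative assumptions:

```python
import math

# Assumed focal length in pixel units; purely illustrative.
PIXELS_PER_RADIAN = 3000.0

def motion_blur_pixels(gyro_rad_per_s, ois_correction_rad_per_s, exposure_s):
    """Estimate blur extent in pixels from residual camera motion."""
    residual = max(0.0, gyro_rad_per_s - ois_correction_rad_per_s)
    return residual * exposure_s * PIXELS_PER_RADIAN

def blur_likelihood(blur_px, threshold_px=2.0):
    """Map blur extent to a 0..1 likelihood via a soft (sigmoid) threshold."""
    return 1.0 / (1.0 + math.exp(-(blur_px - threshold_px)))
```

Longer exposures or faster camera motion both push the likelihood toward 1, which is why the signal needs both the gyroscope data and the exposure time.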

All of these individual scores are used to train a model that predicts an overall quality score consistent with human frame ratings, with the goal of maximizing end-to-end product quality.
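Combining the per-frame signals into a single predicted quality score can be sketched as a simple weighted sum. Top Shot uses a learned model trained against human ratings; the feature names and weights below are assumptions standing in for that model:

```python
# Assumed weights over the per-frame signals; a trained model would learn
# these (and likely use a richer function than a linear combination).
WEIGHTS = {
    "face_score": 0.5,       # face analysis is prioritized
    "motion_saliency": 0.2,  # subject motion saliency score
    "sharpness": 0.2,        # inverse of global motion blur likelihood
    "three_a": 0.1,          # auto exposure / focus / white balance status
}

def overall_quality(features):
    """Weighted sum of per-frame signals, each assumed to lie in [0, 1]."""
    return sum(WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS)
```

Ranking the buffered frames by this score is then what selects the shutter frame's alternatives.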

During development, Google also took into account what users perceive as the best shot. It collected data from hundreds of volunteers, asking which frames they thought were best. Other steps included improvements to avoid blur and to handle multiple faces.
