The Pixel 3 suffered a flood of leaks before it launched, but Google still managed to keep some unique software features under wraps. One of the most notable is Top Shot. If you are not familiar with it, Top Shot captures up to 90 images across the three seconds before, during, and after you press the shutter button. It then analyzes those images in real time and presents the shot taken at the moment you pressed the shutter, plus two recommended alternatives.
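To make that capture window concrete, here is a minimal sketch of the kind of rolling frame buffer such a feature implies. The `camera` object, frame counts, and function names are hypothetical assumptions for illustration, not Google's actual implementation:

```python
from collections import deque

FPS = 30               # assumed capture rate: ~90 frames over ~3 seconds
BUFFER_SIZE = 45       # ~1.5 s of frames kept from before the shutter press

def capture_top_shot_window(camera, post_frames=45):
    """Illustrative only: keep a rolling buffer of recent frames,
    then append frames captured after the shutter press."""
    before = deque(maxlen=BUFFER_SIZE)   # oldest frames fall off automatically
    while not camera.shutter_pressed():  # hypothetical camera API
        before.append(camera.next_frame())
    shutter_frame = camera.next_frame()  # the frame the user actually took
    after = [camera.next_frame() for _ in range(post_frames)]
    return list(before), shutter_frame, after
```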
Filtering those photos down to a recommendation is a difficult process, and Google has broken down the work behind Top Shot in a post on its AI Blog. When analyzing images, Top Shot uses signals like blur detection, lighting analysis, and emotional expressions, which give the analysis qualitative, objective, and subjective factors to weigh.
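As an illustration of how several per-frame signals could be folded into a single ranking, here is a toy scorer. The signal names and weights are assumptions for the sketch; Google's actual model is learned rather than hand-tuned:

```python
def score_frame(signals):
    """Combine per-frame signals (each normalized to 0..1) into one score.
    Weights are made up for illustration."""
    return (
        0.4 * (1.0 - signals["blur"])    # objective: sharper is better
        + 0.3 * signals["lighting"]      # qualitative: exposure quality
        + 0.3 * signals["expression"]    # subjective: smiles, open eyes
    )

def recommend(frames, signals_by_frame, k=2):
    """Rank all buffered frames and return the k best alternatives."""
    ranked = sorted(frames, key=lambda f: score_frame(signals_by_frame[f]),
                    reverse=True)
    return ranked[:k]
```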
That analysis demands high accuracy, so Google gathered data from hundreds of volunteers who rated which of the 90 pictures they liked best, along with feedback about why they preferred that image. This data was used to refine Top Shot's quality metrics so that it could consistently choose the image people perceived as best.
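One common way to turn that kind of "I prefer picture A over picture B" feedback into a quality metric is pairwise preference learning. The sketch below is a generic Bradley-Terry-style learner, offered only as an assumption about how such ratings might be used, not a description of Google's training pipeline:

```python
import numpy as np

def fit_preference_weights(pairs, lr=0.1, epochs=200):
    """Toy pairwise-preference learner: each pair is
    (features_of_preferred_frame, features_of_rejected_frame).
    Learns weights so preferred frames score higher."""
    w = np.zeros(len(pairs[0][0]))
    for _ in range(epochs):
        for winner, loser in pairs:
            diff = np.asarray(winner) - np.asarray(loser)
            p = 1.0 / (1.0 + np.exp(-(w @ diff)))  # P(winner beats loser)
            w += lr * (1.0 - p) * diff             # gradient ascent on log-likelihood
    return w
```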
Processing these images takes a lot of work, and Google credits the Pixel's Visual Core as the powerhouse behind the process. All images are analyzed on the device and in real time, for both privacy and speed.
There are plenty more technical details to the process. If you want to nerd out, follow the link below to learn more about Top Shot from Google.