The Pixel 6 Pro lacks the traditional, dedicated hardware that usually powers face unlock, a feature we learned last week was originally scheduled to launch with the phone. Here is how it might work, in contrast to the earlier implementation on the Pixel 4 series.
Face unlock on the Pixel 4 (and Apple's Face ID on the iPhone) begins with a flood illuminator flashing infrared light at your face. A dot projector then projects thousands of tiny points onto it. One or more IR cameras capture the image, which is then compared to the face unlock model saved during setup.
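The capture-then-compare flow described above can be sketched in a few lines. This is a hypothetical illustration only; the function names, the embedding idea, and the threshold are assumptions, since neither Google's nor Apple's actual pipeline is public.

```python
import math

MATCH_THRESHOLD = 0.85  # assumed similarity cutoff, purely illustrative

def cosine_similarity(a, b):
    """Similarity between two face embeddings (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def try_unlock(frame_embedding, enrolled_embedding):
    # 1. Flood illuminator and dot projector fire (hardware, not shown here).
    # 2. An IR camera frame is reduced to an embedding by a face model.
    # 3. That embedding is compared to the one saved during enrollment.
    return cosine_similarity(frame_embedding, enrolled_embedding) >= MATCH_THRESHOLD
```

The key point is that the saved "face unlock model" is never a raw photo; matching happens against a compact representation like the embedding above.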
On Google's latest Pixel phones, the 6 Pro's front-facing camera has a higher resolution (11.1MP vs. 8MP), a wider field of view (94 vs. 84 degrees), an ƒ/2.2 aperture (vs. ƒ/2.0), and larger pixels (1.22μm vs. 1.12μm) than the regular Pixel 6. The most significant distinction is that the Sony IMX663 in Google's larger phone supports dual-pixel autofocus (DPAF), while the IMX355 in the smaller model does not.
Oddly, the Pixel 6 Pro's spec page does not list dual pixels for the front-facing camera, even though Asus confirmed that capability when it used the IMX663 in the Zenfone 8.
Since the Pixel 2, Google has used DPAF to create depth maps for Portrait Mode with just a single lens, including the front-facing camera. In a 2017 blog post, Google explained:
If the (small) lens of the phone's rear-facing camera were split into two halves, the world would look slightly different when viewed through its left and right halves, respectively. Although these two viewpoints are less than 1mm apart (roughly the diameter of the lens), they are distinct enough to compute stereo and generate a depth map.
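The stereo computation the quote describes comes down to the classic pinhole triangulation relation, Z = f·B/d, where the ~1mm gap between the two half-lens viewpoints is the baseline B. A minimal sketch with illustrative numbers (the focal length and disparity below are assumptions, not Pixel specs):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo triangulation: depth Z = f * B / d.

    focal_px     -- focal length in pixels (assumed value below)
    baseline_m   -- distance between the two viewpoints in meters
    disparity_px -- how far the same point shifts between the two views
    """
    return focal_px * baseline_m / disparity_px

# With an assumed 3000 px focal length and the ~1mm (0.001 m) dual-pixel
# baseline, a 6 px disparity corresponds to a point half a meter away:
z = depth_from_disparity(3000, 0.001, 6.0)  # -> 0.5 (meters)
```

Because the baseline is so tiny, disparities are subpixel for anything far away, which is why Google leaned on machine learning to squeeze usable depth out of the signal.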
A year later, Google further improved DPAF-generated depth estimation on the Pixel 3 with machine learning. In 2019, the Pixel 4 substantially enhanced Portrait Mode again by combining dual pixels with its dual rear cameras.
A depth map created with DPAF could capture the contours of your face, but Google also has depth-from-motion algorithms, developed for ARCore, that need just a single RGB camera:
The depth map is created by taking multiple images from different angles and comparing them as you move your phone to estimate the distance to every pixel.
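Depth from motion is the same triangulation idea, except the baseline comes from the phone's own movement between frames rather than from a second viewpoint inside the lens. A minimal sketch, assuming a pinhole camera model and illustrative numbers (the focal length, motion, and pixel shift are assumptions, not ARCore internals):

```python
def depth_from_motion(focal_px, camera_shift_m, pixel_shift_px):
    """Two-frame triangulation where the stereo baseline is supplied by
    device motion (e.g. measured via the IMU) instead of a second camera."""
    return focal_px * camera_shift_m / pixel_shift_px

# A 2 cm sideways move that shifts a tracked feature by 40 px,
# at an assumed 3000 px focal length, puts the point 1.5 m away:
z = depth_from_motion(3000, 0.02, 40.0)  # -> 1.5 (meters)
```

The practical upside is that even a few centimeters of hand motion yields a far larger baseline than the ~1mm available from dual pixels, trading hardware for the requirement that the scene (or phone) moves.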
Google now has Tensor for faster ML processing, and the company has already touted how the chip enables quicker, more accurate face detection in photos while consuming less power.
These building blocks, drawing on the computational photography and machine learning capabilities the company has been developing for years, could be how Google brings face unlock to the Pixel 6 Pro.
FTC: We use income earning auto affiliate links. More.