How Apple's LiDAR Sensor Differs From The 'TrueDepth' Camera Behind Face ID

Apple's first LiDAR sensor was introduced with the 2020 iPad Pro.

The technology uses a method similar to the one behind Apple's famous Face ID facial recognition, but differs from it in several important ways.

In a teardown session, iFixit disassembled a 2020 iPad Pro and revealed a camera module that packs a 10MP ultra-wide camera, a 12MP wide camera, and a LiDAR (Light Detection and Ranging) scanner.

Here, the LiDAR is composed of two modules whose lenses are mounted overlapping one another.

The system consists of a transmitter, the VCSEL (vertical-cavity surface-emitting laser), and a receiver sensor: the former emits a series of infrared points, which the latter detects.

In the teardown, iFixit found that the LiDAR system emits a regular pattern of points that is noticeably sparser than the one used by the TrueDepth camera.

This means the 2020 iPad Pro's LiDAR is not designed for Face ID-style applications.

This is because the depth mapping LiDAR is capable of is purpose-built for coarser measurements over a longer range and a wider field, rather than the finer, more detailed measurements needed to scan a face.

Apple's LiDAR system in the 2020 iPad Pro is unlike its Android counterpart.

Instead of the ToF (Time-of-Flight) method common on Android devices, which uses a pulsed laser or LED to illuminate an entire scene at once, Apple scans the scene point by point with a laser beam. This method allows it to scan objects up to 5 meters away, whereas 'regular' ToF sensors can only operate within a distance of around two meters.

Apple believes that its LiDAR sensor can immensely benefit Augmented Reality (AR) experiences. AR is better known as the technology that overlays information and virtual objects on real-world scenes in real time.
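For developers, the LiDAR's depth data surfaces through ARKit rather than as raw sensor output. As a rough illustration only (a minimal sketch, assuming a LiDAR-equipped device running iOS 14 or later with ARKit's scene-depth frame semantics available), enabling and reading the per-frame depth map looks roughly like this:

```swift
import ARKit

class DepthReader: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // Scene depth requires a LiDAR-equipped device, such as the 2020 iPad Pro.
        guard ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) else { return }

        let configuration = ARWorldTrackingConfiguration()
        configuration.frameSemantics = .sceneDepth  // ask ARKit for LiDAR-derived depth maps
        session.delegate = self
        session.run(configuration)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Each ARFrame can carry a depth map: distance in meters, per pixel.
        guard let depthMap = frame.sceneDepth?.depthMap else { return }
        let width = CVPixelBufferGetWidth(depthMap)
        let height = CVPixelBufferGetHeight(depthMap)
        print("Depth map: \(width)x\(height) pixels")
    }
}
```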

Compared to Face ID, Apple's LiDAR in the 2020 iPad Pro scans an object by projecting dots over a 2D space to judge 3D distance, using a far sparser laser grid.

Put another way, LiDAR specializes in measuring the time each beam of light takes to travel out and bounce back, hence the name 'ToF', in order to create a depth map of objects that are farther away, while Face ID instead focuses on scanning a 3D model of a much closer object, producing a more detailed depth map.
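The arithmetic behind that time measurement is straightforward: light travels at a known, constant speed, so half of the measured round-trip time gives the distance. A minimal sketch of the principle (the timing figure below is illustrative, not an Apple specification):

```swift
import Foundation

/// Speed of light in meters per second.
let speedOfLight = 299_792_458.0

/// Distance from a time-of-flight measurement:
/// the pulse travels out and back, so halve the round trip.
func distance(fromRoundTripTime seconds: Double) -> Double {
    return speedOfLight * seconds / 2.0
}

// A pulse returning after ~33.3 nanoseconds has travelled about
// 10 meters in total, so the object is roughly 5 meters away.
let roundTrip = 33.3e-9
print(String(format: "%.2f meters", distance(fromRoundTripTime: roundTrip)))  // ≈ 4.99 meters
```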

LiDAR vs. Face ID
Infrared dot projections for the LiDAR module (left), which are sparser than the ones used by Face ID (right)

According to Apple, its Face ID works by projecting more than 30,000 invisible infrared dots onto a face and producing a 3D mesh.

To do this, the technology uses the "TrueDepth camera system", which includes an infrared camera, a flood illuminator, a dot projector, and a proximity sensor.

The invisible infrared light helps Face ID identify the user's face even when the surroundings are dimly lit.

The dot projector then casts the tens of thousands of invisible infrared dots onto the user's face, building a unique but identifiable facial map from the shape and contours of the face.

The infrared camera then reads that dot pattern by capturing an infrared image of it.
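On the app side, the depth maps the TrueDepth system produces are exposed through AVFoundation; the Face ID authentication itself stays locked away from developers. A minimal sketch, assuming a TrueDepth-equipped device, of selecting the front camera and enabling depth-data output:

```swift
import AVFoundation

// Select the front-facing TrueDepth camera, if the device has one.
guard let device = AVCaptureDevice.default(.builtInTrueDepthCamera,
                                           for: .video,
                                           position: .front) else {
    fatalError("No TrueDepth camera on this device")
}

let session = AVCaptureSession()
let input = try! AVCaptureDeviceInput(device: device)
if session.canAddInput(input) {
    session.addInput(input)
}

// AVCaptureDepthDataOutput streams depth maps built from the
// infrared dot pattern the TrueDepth system projects and reads.
// A delegate conforming to AVCaptureDepthDataOutputDelegate would
// receive the per-frame depth data.
let depthOutput = AVCaptureDepthDataOutput()
if session.canAddOutput(depthOutput) {
    session.addOutput(depthOutput)
    depthOutput.isFilteringEnabled = true  // smooth holes in the depth map
}

session.startRunning()
```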

These are hardware components that the LiDAR module doesn't have.

Because of this particular approach to creating a 3D model, Face ID is purpose-built for authentication, whereas LiDAR's ToF is better seen as a 3D camera.

Following the 2020 iPad Pro, the next Apple products to come equipped with LiDAR are the iPhone 12 Pro and the iPhone 12 Pro Max.