A Time of Flight Camera, also known as a ToF Camera or ToF Sensor, is a depth sensor that works by sending a pulse of infrared light into the scene and measuring the distance for every pixel within range. It relies on how long it takes the IR light to be reflected back to the camera. Because the speed of light is constant, the distance can be calculated with simple math.
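As a minimal sketch of that math (the 10-nanosecond round-trip time below is just an illustrative value):

```python
# Speed of light in a vacuum, in meters per second.
SPEED_OF_LIGHT = 299_792_458

def distance_from_round_trip(t_seconds):
    """Distance to an object given the round-trip time of a light pulse.

    The pulse travels to the object and back, so the total path is halved.
    """
    return SPEED_OF_LIGHT * t_seconds / 2

# A round-trip time of 10 nanoseconds corresponds to roughly 1.5 meters.
print(distance_from_round_trip(10e-9))
```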
Time of Flight is a general concept that applies to many fields and is not limited to the speed of light: ultrasound ToF sensors, for example, are used for VR tracking. In smartphones, ToF systems take the form of a camera that produces depth-map data, similar to a Z-buffer in computer graphics.
The Depth map (grayscale image) in the photo below visualizes the kind of data we want from a ToF camera.
Infra-Red ToF Principles
The fundamental principle is relatively simple: a ToF Camera captures distance instead of colors. You can think of each ToF sensor pixel as a stopwatch. All pixels start counting at the same moment, when the light is emitted from the sensor (start of flight).
However, they stop their clocks individually, as the signal is reflected back at slightly different moments (end of flight). The time it takes for light to be projected and then reflected (start and end) is called the “time of flight” (of the light).
Because the distance varies between the various objects reflecting the light, sensor pixels don’t all stop at once. With elementary math, the distance can be calculated from the captured data. The implementation can be quite complex, with the possibility of multipath reflections (light bouncing off more than one object before arriving at the sensor).
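The per-pixel calculation above can be sketched as follows, assuming an idealized sensor where each pixel records a single clean reflection (multipath is ignored):

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def depth_map_from_times(round_trip_times):
    """Turn per-pixel round-trip times (seconds) into a depth map (meters).

    Each pixel's "stopwatch" is independent; real hardware must also cope
    with multipath, where light bounces off several objects first.
    """
    return [[SPEED_OF_LIGHT * t / 2.0 for t in row] for row in round_trip_times]

# Toy 2x2 sensor: each pixel stopped its clock at a slightly different moment.
times = [[1.0e-8, 2.0e-8],
         [1.0e-8, 4.0e-8]]
print(depth_map_from_times(times))  # roughly [[1.5, 3.0], [1.5, 6.0]] meters
```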
ToF Cameras vs. Stereo 3D Dual-Cameras
Although ToF cameras have a relatively low resolution (320×240 to 640×480), they outperform dual-camera depth sensing. ToF is a fundamentally better and more natural way to measure depth, because it measures distance directly instead of inferring it from parallax.
Applications in smartphones include higher-quality Bokeh (out-of-focus blur) and Augmented Reality (AR) applications such as VR-headset tracking. These technologies were first introduced in large devices with Google’s Project Tango, and even implemented by Lenovo in a handset.
But it is only recently that phones such as the Honor View 20, Huawei P30 Pro, LG G8, and Galaxy S10 5G have integrated much more compact ToF cameras that don’t affect the design footprint significantly.
Dual RGB-camera stereo capture is the most power-hungry form of depth sensing, but it has the advantage of using two RGB cameras that serve a real photographic purpose: the primary camera and an ultrawide or zoom camera. For that reason, it is less expensive than adding a dedicated ToF camera.
However, using the ultrawide camera for stereo 3D capture reduces the depth precision in the center of the frame, while zoom lenses beyond 80mm may have a field of view that is too narrow for practical Bokeh usage.
ToF In Direct Sunlight
To avoid interference as much as possible, some Time of Flight sensors encode the light pulse with a specific modulation when it is emitted (like a binary code for light). The receiving sensor can then filter the incoming light and ignore powerful light sources, such as the sun, that would otherwise degrade the accuracy of the measurements.
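One common encoding scheme is continuous-wave amplitude modulation: the sensor samples the returning signal at four phase offsets of the modulation and recovers distance from the phase shift. Here is a simplified sketch under assumed values (a hypothetical 20 MHz modulation frequency and idealized cosine samples); note how a constant ambient offset, like sunlight, cancels out:

```python
import math

C = 299_792_458.0   # speed of light, m/s
F_MOD = 20e6        # assumed modulation frequency: 20 MHz

def distance_from_phase_samples(c0, c1, c2, c3):
    """Recover distance from four samples of the reflected signal taken at
    0, 90, 180 and 270 degrees of the modulation cycle.

    A constant offset (e.g. direct sunlight) adds equally to all four
    samples, so it cancels out in the differences below.
    """
    phase = math.atan2(c3 - c1, c0 - c2) % (2 * math.pi)
    return C * phase / (4 * math.pi * F_MOD)

# Simulate an object 3 meters away under a strong constant ambient light.
true_distance = 3.0
phi = 4 * math.pi * F_MOD * true_distance / C
ambient = 5.0  # constant sunlight offset; cancels in the subtraction
samples = [math.cos(phi + k * math.pi / 2) + ambient for k in range(4)]
print(distance_from_phase_samples(*samples))  # ~3.0 meters
```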
Smartphone ToF cameras have a limited range due to the relatively low power of the light source. It’s usually more than enough for Bokeh photos, as the range can reach a few yards.
Different IR ToF Types
Smartphones have camera-based ToF sensors, but there are other implementations of the technology that use lasers. The big difference between the two is that a camera ToF has an illuminator that diffuses the light in all directions (a half-sphere).
Laser-based ToF systems such as lidars (LIght Detection And Ranging) send a laser beam in a single direction, but it can reach much farther, sometimes many kilometers if the laser is powerful enough. These systems scan the scene point by point at extremely high speed to produce enough data.
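That scanning process can be sketched as follows: each laser shot returns one range at known beam angles, which becomes a single 3D point, and sweeping the beam accumulates a point cloud. This is a simplified model that ignores beam divergence and timing details:

```python
import math

def polar_to_point(range_m, azimuth_rad, elevation_rad):
    """Convert one laser range measurement plus the beam's angles into an
    (x, y, z) point. A scanning lidar repeats this at extremely high speed,
    sweeping the beam to build a full point cloud of the scene."""
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return (x, y, z)

# One horizontal sweep line: five shots at increasing azimuth angles.
cloud = [polar_to_point(10.0, math.radians(a), 0.0) for a in range(-10, 11, 5)]
print(cloud[2])  # the shot straight ahead: (10.0, 0.0, 0.0)
```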
ToF vs. Structured Light
People often confuse Time of Flight with Structured Light, the kind of technology that powers the original Kinect and the iPhone X Face ID feature. While it is possible to convert structured light data into a depth map, it’s not the most accurate way to do it.
Structured Light consists of projecting a pattern onto a surface in order to reveal the surface’s bumps and ridges. Distance can then be computed by triangulation from the original pattern. It works very differently, and you can easily distinguish the two if given the opportunity to see the structured light pattern, like in this video:
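The triangulation step above can be sketched with the classic pinhole-camera relation: the amount a projected pattern feature shifts (the disparity) between its expected and observed positions determines depth. The focal length and baseline below are hypothetical values:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulation: depth = focal length * baseline / disparity.

    The farther the surface, the smaller the shift of the projected
    pattern between the projector and the camera. The same relation
    underlies dual-camera stereo depth sensing."""
    return focal_px * baseline_m / disparity_px

# Hypothetical setup: 800 px focal length, 5 cm projector-camera baseline.
print(depth_from_disparity(800, 0.05, 20))  # 2.0 (meters)
```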
To compare the two technologies’ performance, we can conveniently look at Kinect 1 (structured light) vs. Kinect 2 (ToF). Mouser Electronics also has a great comparative table. Kinect 2 increased the resolution and accuracy of the system by a wide margin. You can see the demo in the video below.
For photographic usage, Time of Flight sensors also offer depth data that requires MUCH less processing than structured light, dual-RGB cameras, or lidar, making them a particularly good choice for power-constrained smartphones.
ToF Cameras Origins
ToF depth sensing was initially applied to civil engineering as a way to measure distances with greater speed and accuracy. Used from the air, or even from space, ToF technology can accurately map any surface in extremely high detail.
As the technology was miniaturized, it made its way into self-driving vehicles, and now smartphones. ToF cameras are a vast topic, and there is much more to learn, but as far as smartphones are concerned, you should now have a good idea of their purpose, along with their pros and cons.