Qualcomm has just shed some light on its Clear Sight camera technology. Clear Sight exploits two camera modules with identical sensors to improve imaging performance: one captures color, while the other captures brightness with a slightly different setup. Let’s explore the properties and caveats of such a technique.

Qualcomm pitches the idea as a good analogy to the human eye, which has two sets of light sensors: the Cones and the Rods. The Cones capture color variations, while the Rods mostly capture brightness, although they do pick up some color information too.

Color and brightness sensors in the Human Eye


In the human eye, the Cones and Rods are not spread evenly across the retina. The Cones are concentrated in a central area called the Macula, and especially in the Fovea Centralis, a region almost exclusively dedicated to Cones. That makes sense, since the Fovea Centralis is where you see most of the detail: it is the spot you are “looking at.” Other parts of the eye end up gathering data for your peripheral vision.

It is because of this split between “sharp” and less-sharp vision, for lack of better terms, that VR-related companies have been looking into Foveated Rendering, the concept of lowering detail away from the center of visual attention. But that is another story…

Dual-sensor implementation in Qualcomm Clear Sight

While Qualcomm’s Clear Sight technology finds a good analogy in the human eye, it works in a much simpler way: a monolithic camera module composed of two cameras hosts two identical sensor-and-lens sets. However, one of the sensors has its color filter removed, which means its light-sensing pixels can capture more light across the whole spectrum. (Here’s a basic rundown of how image sensors work.)
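To make the light advantage concrete, here is a toy numeric sketch in Python/NumPy. It is not Qualcomm’s actual sensor model, just a simplified illustration: a Bayer color filter lets each pixel record only one of the three color channels, while a filterless pixel integrates everything that reaches it.

# Toy model (assumption, not a real sensor simulation): compare how much light
# a Bayer-filtered pixel records vs. a pixel on a sensor with the filter removed.
import numpy as np

rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 1.0, size=(4, 4, 3))    # tiny random RGB scene radiance

# Bayer mosaic: each pixel keeps only one color channel, discarding
# roughly two thirds of the incoming light. GRBG tile (0=R, 1=G, 2=B).
bayer_pattern = np.array([[1, 0], [2, 1]])
channel = np.tile(bayer_pattern, (2, 2))
rows, cols = np.indices(channel.shape)
bayer_signal = scene[rows, cols, channel]

# Filterless ("mono") sensor: every pixel integrates the full spectrum.
mono_signal = scene.sum(axis=2)

print("mean signal, Bayer sensor:", bayer_signal.mean())
print("mean signal, mono sensor: ", mono_signal.mean())   # roughly 3x higher

In this simplified model the filterless sensor gathers roughly three times the signal per pixel; real color filters are not this clear-cut, but the direction of the advantage is the same.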

As both image sensors capture live data, the information has to be processed and combined. This is done by the Image Signal Processor(s), or ISP, on board the system on chip (aka SoC): in this case, the Snapdragon 820 and Snapdragon 821 at publishing time. Both chips conveniently have two ISPs, which offer the horsepower required to compute all of this.
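Qualcomm has not detailed its exact fusion algorithm, but one common way to combine a color frame with a filterless (mono) frame is to keep the chrominance from the color sensor and take the cleaner luminance from the mono sensor. Here is a minimal Python/OpenCV sketch of that general idea, assuming both frames have the same resolution and are already aligned; the file names are placeholders, not anything from Qualcomm.

# A generic luma/chroma fusion sketch (not Qualcomm's actual pipeline).
import cv2

color = cv2.imread("color_frame.png")                      # frame from the Bayer sensor (BGR)
mono = cv2.imread("mono_frame.png", cv2.IMREAD_GRAYSCALE)  # frame from the filterless sensor

ycrcb = cv2.cvtColor(color, cv2.COLOR_BGR2YCrCb)
ycrcb[:, :, 0] = mono                                      # swap in the lower-noise luminance
fused = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
cv2.imwrite("fused.png", fused)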

At the moment, it seems that the sensor fusion works with two identical camera modules and optics (minus the color filter), but in theory it would be possible to do the same thing with two different camera modules at some point. Apple seems to be doing just that on the iPhone 7 Plus.

Qualcomm has not yet announced such capabilities, but, in theory, a software update could bring this ability to phones such as the LG G5 and the LG V20 at some point. Both pair a normal lens with a wide-angle lens that has different optics, f-stop and resolution, which makes things much harder.

Sought-after results and caveats

In theory, it is possible to use sensor fusion* and Computational Photography algorithms to obtain a better final image. Sensor fusion across multiple sensors is a proven method in Astronomy, and it has even been tested successfully with consumer-level cameras such as the Light Camera, which has more than a dozen phone-like camera modules.

*sensor fusion: combining data from multiple sensors into one final result (here, the photo).
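As a back-of-the-envelope illustration of why fusing data helps (the same principle behind stacking exposures in astronomy), averaging N noisy readings of the same pixel reduces random noise by roughly the square root of N. A toy Python/NumPy sketch, not tied to any particular camera:

# Toy noise-stacking demo: more fused measurements, smaller random error.
import numpy as np

rng = np.random.default_rng(42)
true_pixel = 0.6
n_frames = 8
frames = true_pixel + rng.normal(0.0, 0.05, size=n_frames)  # noisy readings of one pixel

single_error = abs(frames[0] - true_pixel)
stacked_error = abs(frames.mean() - true_pixel)
print(f"error from one frame:         {single_error:.4f}")
print(f"error from {n_frames} fused frames:     {stacked_error:.4f}")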

But the concept has yet to be truly proven on smartphones. Huawei, with its Huawei P9 handset, introduced a similar dual-camera concept split into color and brightness sensing. The thing is, the Huawei P9 implemented the idea but didn’t use the absolute best mobile sensors available today.

In the end, our review of the P9 handset revealed that while the dual-lens setup was promising, it does not yet challenge a better single sensor-plus-lens setup such as the one in the Galaxy 7 series (S7/S7 Edge/Note 7), also powered by Qualcomm’s Snapdragon 820.

Don’t miss: Huawei P9 review and photo comparisons.

Don’t miss: Galaxy S7 / S7 Edge Review and Galaxy Note 7 Review

The final caveat of most multi-camera setups (when used with sensor fusion) is that they often cannot use Optical Image Stabilization at the camera module level. That’s because each camera’s stabilization system could make the optics move in an unsynchronized manner.

Qualcomm has confirmed to Ubergizmo that this is NOT the case for Clear Sight, which does support independent OIS on both lenses. The ISP is powerful enough to run software that reconciles the differences in movement between the two modules.

Learn more: what is Image Stabilization?

The data fusion becomes more difficult, because the camera modules are supposed to stay aligned within a certain error tolerance. This is the first dual camera with sensor fusion AND support for independent OIS that we’ve heard of.
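In practice, the fusion step has to re-register the two frames before blending them, since module tolerances and independently moving OIS mean the images never line up perfectly. Below is a minimal sketch using OpenCV’s generic ECC alignment; this is an assumption on my part, not Clear Sight’s actual method, and it reuses the placeholder file names from the earlier sketch.

# Generic two-frame registration sketch (not Qualcomm's implementation):
# estimate a small Euclidean warp between the two captures, then warp the
# mono frame onto the color frame's geometry before fusing.
import cv2
import numpy as np

mono = cv2.imread("mono_frame.png", cv2.IMREAD_GRAYSCALE)
color = cv2.imread("color_frame.png")
color_gray = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY)

warp = np.eye(2, 3, dtype=np.float32)          # start from the identity transform
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
_, warp = cv2.findTransformECC(color_gray, mono, warp,
                               cv2.MOTION_EUCLIDEAN, criteria)

h, w = color_gray.shape
aligned_mono = cv2.warpAffine(mono, warp, (w, h),
                              flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)

The aligned mono frame could then feed the luma/chroma fusion step shown earlier; a production pipeline would obviously do this per frame, in hardware, and with far tighter tolerances.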

Conclusion: multi-sensor is here to stay

Clear Sight and other multi-camera techniques based on sensor fusion have great potential. The theory is sound, and there are convincing proofs of concept out there. The real question is this:

When does multi-camera become truly better, and by how much? The jury is still out on this one.

Most likely, it has more to do with how much hardware phone makers are willing to throw at the problem than with Clear Sight itself. If an OEM were willing to use two of the best camera modules available, then the likelihood of obtaining better images should be 100%. But what are the odds of that happening?

We’re not sure, since many find it more tempting to use two cheaper camera modules, whether for marketing purposes or for cost efficiency. Only time will tell, but when OEMs are ready and willing, technologies like Clear Sight will be there.

One thing is certain: multi-camera setups are in the future of smartphones and other imaging devices. Camera modules are limited by the thickness and size of their host devices. At the same time, image quality will eventually be limited by the physics of light and what the sensors can gather. Sooner or later, the most efficient way forward will be to add more cameras. It’s only a matter of time.
