Quad-Bayer Camera Sensors For Better Photos

The Quad-Bayer pixel structure is a specific sensor pixel layout that allows for three distinct modes: dual-exposure HDR, quad binning, and normal capture. It was introduced in phones such as the Huawei P20 Pro and has recently drawn more attention with the launch of handsets such as the Honor View 20, as OEMs publicize this advanced feature in their communications.

In a nutshell, the Quad-Bayer filter is an extension of the classic Bayer RGB pixel pattern used in most color image sensors, and it offers three modes: normal, dual-exposure, and pixel binning.

With these modes, the camera sensor can (in theory) deliver very good performance in daylight (high resolution), in low light (binning), and in high-contrast scenes (dual exposure). Previously, it was thought impossible to get both high resolution and great low-light performance out of the same sensor.

First, a quick recap about classic Bayer

Quad-Bayer (left) vs. Bayer (right) | Courtesy of Sony

Let’s step back to quickly show what the regular Bayer pattern is (photo above, right), and you’ll see how Quad-Bayer pushes it to the next level. Camera sensors are built to convert light (photons) into electricity. This also means that they can only see the light intensity and not colors by default. The pattern was named after Bryce Bayer, its inventor (at Kodak).

To capture colors, a layer with an RGB Bayer pattern is placed on top of the sensing surface. From there, each pixel only receives photons of a specific color (Red, Green or Blue) and the image looks like a mosaic. Later, the software will de-mosaic the raw sensor data to build a regular colored image.
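To make the mosaic and de-mosaic steps concrete, here is a minimal sketch: it samples an RGB image through an RGGB Bayer layout and rebuilds full color with naive bilinear interpolation. The helper names and the 3×3 averaging are illustrative choices; production demosaicing is far more sophisticated.

```python
# Sketch only: sample an RGB image through an RGGB Bayer layout, then rebuild
# full color with naive bilinear (normalized-convolution) interpolation.
# Real demosaicing algorithms are considerably more sophisticated.
import numpy as np
from scipy.signal import convolve2d

def bayer_mosaic(rgb):
    """Keep one color sample per pixel, following the RGGB layout."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # Red on even rows / even columns
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # Green
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # Green
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # Blue on odd rows / odd columns
    return mosaic

def demosaic_bilinear(mosaic):
    """Fill in the two missing colors at each pixel from nearby known samples."""
    h, w = mosaic.shape
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True   # where Red was measured
    masks[0::2, 1::2, 1] = True   # where Green was measured
    masks[1::2, 0::2, 1] = True
    masks[1::2, 1::2, 2] = True   # where Blue was measured
    out = np.zeros((h, w, 3))
    kernel = np.ones((3, 3))
    for c in range(3):
        known = np.where(masks[..., c], mosaic, 0.0)
        counts = convolve2d(masks[..., c].astype(float), kernel, mode="same")
        out[..., c] = convolve2d(known, kernel, mode="same") / np.maximum(counts, 1.0)
    return out

rgb = np.random.rand(8, 8, 3)  # stand-in for a tiny full-color scene
restored = demosaic_bilinear(bayer_mosaic(rgb))
```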

We’ve added a video that explains how the Bayer filter works at the end of this article.

Quad Bayer is great for single-frame HDR and low-light noise reduction

The Quad-Bayer color filter array was initially introduced by Sony to cater to the high-resolution security camera market, which is very lucrative.

In low-light situations, many camera systems use multi-frame photography to reduce noise or to capture multiple exposures. However, this can also introduce ghosting when objects are moving, which unfortunately degrades the image quality of security video footage.

Normal Pixel Binning (left) and HDR (right) modes

The Quad-Bayer pixel structure enables two exposures within a group of four pixels, thus eliminating almost all ghosting issues. This is called the Quad-Bayer HDR mode.
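As an illustration of the idea, the merge for a single same-color quad could look like the sketch below, with one diagonal pair exposed long and the other short. The 8× exposure ratio, 10-bit clipping level and merge rule are assumptions for the example, not Sony's actual processing.

```python
# Sketch of the dual-exposure merge for one same-color quad. The exposure
# ratio, clipping level, and diagonal pairing are illustrative assumptions.
import numpy as np

EXPOSURE_RATIO = 8.0   # assumed long/short exposure ratio
CLIP_LEVEL = 1023.0    # assumed 10-bit saturation code

def merge_quad_hdr(quad):
    """quad: 2x2 array of same-color pixels; diagonal pairs share an exposure."""
    long_px = (quad[0, 0] + quad[1, 1]) / 2.0    # long-exposure pair
    short_px = (quad[0, 1] + quad[1, 0]) / 2.0   # short-exposure pair
    if long_px < CLIP_LEVEL:
        return long_px                  # long exposure still valid: best SNR
    return short_px * EXPOSURE_RATIO    # long exposure clipped: rescale the short one

# A bright region where the long-exposure pixels have clipped:
print(merge_quad_hdr(np.array([[1023.0, 140.0],
                               [150.0, 1023.0]])))  # ~1160, beyond a single exposure's range
```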

Another mode uses a single exposure with classic pixel binning for low-light situations where a high dynamic range isn't required. We have explained this at length, so just follow the link.
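For reference, the binning operation itself is simple. Here is a NumPy-only sketch with made-up sizes and a flat synthetic scene (not real sensor data) that averages each 2×2 same-color group into one larger effective pixel:

```python
# Sketch of 2x2 binning: the four same-color pixels of a quad are averaged
# into one larger effective pixel, trading resolution for lower noise.
import numpy as np

def bin_2x2(channel):
    """Average each 2x2 block of a (H, W) same-color plane into (H/2, W/2)."""
    h, w = channel.shape
    return channel.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

noisy = np.random.poisson(lam=5.0, size=(800, 600)).astype(float)  # synthetic flat scene
binned = bin_2x2(noisy)
# Averaging four independent samples roughly halves the relative noise.
print(noisy.std() / noisy.mean(), binned.std() / binned.mean())
```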

High-resolution by necessity

Quad-Bayer sensors need a very high native resolution because the four pixels within a quad are used together, either sensing with different parameters or binned, which reduces the effective resolution by 4X. That is why Quad-Bayer cameras often default to 10MP or 12MP rather than 40MP or 48MP.

For example, the Sony IMX586 camera sensor, launched in July 2018 (official PR link), has 8000×6000 pixels, or 48MP. However, the auto mode in phones like the Honor View 20 is set to 12MP.
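A quick back-of-the-envelope check of those figures:

```python
# The arithmetic behind the 48MP / 12MP figures quoted above.
native = 8000 * 6000                    # 48,000,000 photosites -> marketed as 48MP
effective = (8000 // 2) * (6000 // 2)   # each 2x2 quad becomes one output pixel
print(native, effective)                # 48000000 12000000
```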

Sensor manufacturers can take advantage of much better semiconductor tooling to build these impressively small sensor pixels. However, a 0.8-micron pixel is tiny, which is why the main goal is to work in groups of four (a “quad”).

There is also a “normal” mode in which pixels work individually. In that case, and thanks to the sheer number of pixels, it is possible to capture finer details but it works best in bright light conditions because the individual sensor pixels are so small.

Sony image that compares 12MP vs. 48MP. Not quite 4X resolution, but some progress nonetheless

Within a quad, all pixels have the same color filter, so this is different from a normal high-resolution sensor with a classic Bayer pattern. One could argue that color reproduction may suffer or that the data may be harder to de-mosaic, but we have not seen any hard data, even though the argument does make a lot of sense.
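To visualize the difference, here is a tiny sketch that builds the two repeating color-filter layouts. The RGGB orientation shown is the common convention; a given sensor may rotate or mirror it.

```python
# Sketch of the two repeating color-filter layouts discussed above.
import numpy as np

def bayer_cfa(h, w):
    """Classic Bayer: the RGGB cell repeats every 2x2 pixels."""
    cell = np.array([["R", "G"],
                     ["G", "B"]])
    return np.tile(cell, (h // 2, w // 2))

def quad_bayer_cfa(h, w):
    """Quad-Bayer: each color covers a 2x2 block, so the cell repeats every 4x4 pixels."""
    cell = np.array([["R", "R", "G", "G"],
                     ["R", "R", "G", "G"],
                     ["G", "G", "B", "B"],
                     ["G", "G", "B", "B"]])
    return np.tile(cell, (h // 4, w // 4))

print(bayer_cfa(4, 4))
print(quad_bayer_cfa(4, 4))
```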

The good thing about the quad color-filter configuration is that it reduces some of the negative effects of cross-talk, where light meant to hit a specific sensor pixel also leaks into its neighbors and reduces sharpness. That’s particularly true for small pixels (less than 1 micron) and is an active topic of research (PDF link).

Possible happy side-effects for computational photography

Example of burst photography from Google. Merging up to 10 photos to produce the final one

Behind the scenes, increased resolution, whether perceptible to humans or not, can increase the accuracy of various algorithms, such as multi-frame photography, that have become a foundation of modern digital photography.

For example, having more detail can greatly help a realignment algorithm stack images together for noise reduction or ultra-long-exposure techniques. It also improves feature detection, which is the foundation of any advanced multi-frame algorithm.
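Below is a minimal sketch of that stacking idea. The whole-frame phase-correlation shift used here is an illustrative stand-in for the far more advanced tile-based alignment that real burst pipelines use, and the synthetic frames are assumptions for the example.

```python
# Sketch of multi-frame stacking: align each frame of a burst to the first
# one and average them to cut noise.
import numpy as np

def align_and_stack(frames):
    """frames: list of (H, W) grayscale arrays; returns a noise-reduced average."""
    ref = frames[0]
    stacked = np.zeros_like(ref, dtype=float)
    for frame in frames:
        # Cross-correlate against the reference in the Fourier domain and take
        # the peak as the integer (dy, dx) shift that re-aligns the frame.
        cross = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame)))
        dy, dx = np.unravel_index(np.argmax(np.abs(cross)), cross.shape)
        stacked += np.roll(frame, shift=(dy, dx), axis=(0, 1))
    # Averaging N aligned frames cuts independent noise by roughly sqrt(N).
    return stacked / len(frames)

# Example: eight noisy, slightly shifted copies of a synthetic scene.
base = np.random.rand(64, 64)
frames = [np.roll(base, shift=(i % 3, i % 2), axis=(0, 1))
          + np.random.normal(0.0, 0.05, base.shape) for i in range(8)]
clean = align_and_stack(frames)
```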

The caveat for all of this is the enormous processing power required to even look at all the data, let alone process a long stream of it. That’s why devices with faster processors (SoC) have more opportunities for image processing improvements. Qualcomm’s Snapdragon 855, Huawei’s Kirin 980 and Apple’s A12 are all extremely powerful hardware platforms.

Conclusion

The idea of clustering sensor pixels isn’t completely new: ~10 years ago, Fujifilm introduced a similar concept with its Super CCD EXR technology (Wikipedia link), which used clusters of two pixels instead of today’s four. That technology offered the same modes (HDR, binning) and halved the resolution when dual-sensing was needed.

Quad-Bayer is a very clever and practical technique that fulfills the role it was built for: improving low-light video and photo capture of moving subjects. It should help with hand-shake too.

It is important to understand that these sensors are not optimized to run at full resolution (although they can), but mainly to extend the HDR capabilities of the camera. Fine details are possible, under specific conditions, but it wasn’t the primary goal when the technology was created.

Perhaps the biggest challenge is for the camera software to determine when exactly to use one of the specific sensor modes. This can be very difficult, and using the wrong mode at the wrong time could be counter-productive.

However, when used in the right scenario, Quad-Bayer should have a visible impact on the image quality.
