While companies were eagerly putting out smartphones with dual cameras, Google stuck with a single camera, opting instead to use AI to boost photo quality. It worked out remarkably well, although the company’s subsequent handsets did eventually include more than one camera.

That being said, Google continues to push out camera software that relies on AI to enhance the photos its phones take. In a recent blog post, the company revealed how it trained its portrait lighting AI to achieve realistic lighting effects that would otherwise require external lighting and more expensive equipment.

According to Google, “We generated training data by photographing seventy different people using the Light Stage computational illumination system. This spherical lighting rig includes 64 cameras with different viewpoints and 331 individually-programmable LED light sources. We photographed each individual illuminated one-light-at-a-time (OLAT) by each light, which generates their reflectance field — or their appearance as illuminated by the discrete sections of the spherical environment.”
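For context, the value of an OLAT capture is that light adds linearly: a subject’s appearance under any new lighting environment can be approximated by a weighted sum of the one-light-at-a-time photos. The snippet below is a minimal NumPy sketch of that relighting idea, not Google’s actual pipeline; the array shapes, the relight function, and the example values are illustrative assumptions.

```python
import numpy as np

def relight(olat_images, light_weights):
    """
    Relight a subject from its reflectance field (illustrative sketch).

    olat_images:   array of shape (num_lights, H, W, 3), one photo per
                   individually lit LED (one-light-at-a-time capture).
    light_weights: array of shape (num_lights, 3), the RGB intensity of a
                   target lighting environment sampled at each LED direction.

    Because light is additive, the subject's appearance under the target
    environment is approximately a weighted sum of the OLAT images.
    """
    num_lights = olat_images.shape[0]
    flat = olat_images.reshape(num_lights, -1, 3)            # (L, H*W, 3)
    relit = np.einsum('lpc,lc->pc', flat, light_weights)     # sum over lights
    return relit.reshape(olat_images.shape[1:])              # back to (H, W, 3)

# Hypothetical example: 331 lights, tiny 4x4 "images" for demonstration
olat = np.random.rand(331, 4, 4, 3)
env = np.random.rand(331, 3)
print(relight(olat, env).shape)  # (4, 4, 3)
```

In Google’s case, data generated this way is used to train a model that runs on the phone itself; the explicit sum above simply illustrates why the Light Stage captures are enough to synthesize new lighting for training.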

The results are very impressive and can make your subjects look better under various lighting conditions, even when the lighting is less than ideal. Granted, actual lighting equipment might produce better results, but for a smartphone to achieve this purely through software is an amazing feat, and we expect it will only get better over time.

Filed in Cellphones > Photo-Video. Source: ai.googleblog
