It is safe to say that many were surprised by how good the camera on the Pixel 2 phones turned out to be. Google’s Nexus lineup was never really known for outstanding cameras, and the fact that the Pixel 2 achieves Portrait Mode purely in software (the phone doesn’t have a dual-lens setup) made the feat all the more impressive.

The good news for developers who wish to take advantage of the Pixel 2’s Portrait Mode is that Google has open-sourced (via 9to5Google) DeepLab-v3+, the deep learning model behind the technology. This means that developers can get a peek under the hood and potentially apply Google’s technology to their own creations.

In a post on Google’s blog, the company explains a bit about how the Pixel 2’s Portrait Mode works. The model performs semantic image segmentation, assigning a semantic label such as “person” or “sky” to every pixel in an image. Doing so requires “pinpointing the outline of objects”, which in turn “imposes much stricter localization accuracy requirements than other visual entity recognition tasks such as image-level classification or bounding box-level detection.”
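To make that concrete, here is a minimal sketch of how a segmentation model can produce a Portrait Mode-style effect: label every pixel, keep the “person” pixels sharp, and blur everything else. Note that Google’s actual release is a TensorFlow model; this illustration swaps in torchvision’s off-the-shelf DeepLabV3 implementation purely for brevity, and the input/output file names are hypothetical.

```python
# Sketch: a software-only "Portrait Mode" via semantic segmentation.
# Assumes torchvision's DeepLabV3 (not Google's released TensorFlow
# model) and placeholder file names.
import torch
from torchvision import models, transforms
from PIL import Image, ImageFilter

# Load a DeepLabV3 model pretrained on PASCAL VOC-style classes.
model = models.segmentation.deeplabv3_resnet101(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("photo.jpg").convert("RGB")  # hypothetical input
batch = preprocess(img).unsqueeze(0)

with torch.no_grad():
    # Per-pixel scores over 21 classes, upsampled to the input size.
    scores = model(batch)["out"][0]
labels = scores.argmax(0)        # per-pixel class index
person_mask = (labels == 15)     # class index 15 is "person" in VOC

# Composite: sharp subject over a blurred background.
blurred = img.filter(ImageFilter.GaussianBlur(radius=12))
mask = Image.fromarray((person_mask.byte() * 255).numpy(), mode="L")
portrait = Image.composite(img, blurred, mask)
portrait.save("portrait.jpg")
```

This is only the segmentation half of the story; the real Portrait Mode also combines the mask with depth cues to vary the blur, but the per-pixel labeling above is the part Google has now shared.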

Google also points out that this kind of technology would have been hard to imagine even five years ago, noting, “We hope that publicly sharing our system with the community will make it easier for other groups in academia and industry to reproduce and further improve upon state-of-art systems, train models on new datasets, and envision new applications for this technology.”
