Creating 3D models can be tricky because, unlike 2D images, which are “flat”, 3D models aren’t, so designers need to consider the model from every angle. Naturally, this also means that 3D models can take a lot longer to build and render than 2D images.

However, NVIDIA Research could soon have a more efficient solution: an AI system that can turn 2D images into 3D models. The system was developed by researchers at NVIDIA Research together with collaborators from the Vector Institute, the University of Toronto, and Aalto University.

Speaking to VentureBeat, NVIDIA’s director of AI and one of the paper’s co-authors, Sanja Fidler, said: “Imagine you can just take a photo and out comes a 3D model, which means that you can now look at that scene that you have taken a picture of [from] all sorts of different viewpoints. You can go inside it potentially, view it from different angles — you can take old photographs in your photo collection and turn them into a 3D scene and inspect them like you were there, basically.”

As Fidler notes, systems that transform 2D images into 3D models aren’t exactly new; Facebook and Google have explored them before. One of the main differences, however, is that NVIDIA’s model can predict more 3D properties, including shape, geometry, color, texture, and lighting.
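To make the list of predicted properties a little more concrete, here is a purely illustrative sketch of what the output of such a 2D-to-3D predictor might contain. The class and field names below are our own inventions, not NVIDIA’s; the stub function stands in for what would, in the real system, be a trained neural network.

```python
from dataclasses import dataclass

@dataclass
class Predicted3DModel:
    """Hypothetical container for the kinds of 3D properties
    a 2D-to-3D system might predict (names are illustrative)."""
    vertices: list       # (x, y, z) mesh vertex positions (shape/geometry)
    faces: list          # triangles as triples of vertex indices
    vertex_colors: list  # per-vertex RGB color
    texture: list        # texture map data (left empty in this sketch)
    lighting: dict       # e.g. light direction and intensity

def predict_from_image(image_pixels):
    """Stand-in for a learned predictor: a real system would run a
    neural network here. We return a single white triangle so the
    structure of the output is concrete."""
    return Predicted3DModel(
        vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
        faces=[(0, 1, 2)],
        vertex_colors=[(255, 255, 255)] * 3,
        texture=[],
        lighting={"direction": (0, 0, -1), "intensity": 1.0},
    )

model = predict_from_image(image_pixels=[[0]])
print(len(model.vertices))  # 3
```

The point of the sketch is simply that a full prediction bundles several distinct outputs (geometry, color, texture, lighting) rather than geometry alone, which is the difference the article highlights.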

Filed in Computers. Source: venturebeat
