NVIDIA researchers create AI that generates a 3D model of an object from a single 2D image


NVIDIA has announced that it has built an artificial intelligence system that generates a 3D model of an object from just a single 2D image of it. This is a significant technological leap, as producing a 3D model previously required multiple images of an object from different viewing angles. NVIDIA calls the new system a "differentiable interpolation-based renderer" (DIB-R).

DIB-R can render a fully textured 3D model from a single image in less than 100 milliseconds. NVIDIA trained a neural network on datasets that pair collections of images of an object, captured from different angles, with its final 3D model. After two days of training, the network was able to predict a 3D model from a previously unseen image. NVIDIA demonstrated this by feeding the trained network a photo of a bird it had never seen before; the system generated a 3D model of the bird, predicting its lighting, texture, and depth.
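The training pipeline described above hinges on the renderer being differentiable: a loss computed on the rendered image can be backpropagated all the way to the 3D parameters. The toy sketch below illustrates that idea in plain PyTorch using a soft point-splatting "renderer". This is not NVIDIA's actual DIB-R rasterizer; the rendering scheme and all names here are illustrative assumptions.

```python
import torch

torch.manual_seed(0)

def soft_render(points, size=32, sigma=0.2):
    # Toy differentiable "renderer": splat each point as a Gaussian
    # blob onto a size x size image (orthographic projection onto the
    # x/y plane; z is ignored). Soft blobs keep gradients non-zero.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, size), torch.linspace(-1, 1, size),
        indexing="ij")
    grid = torch.stack([xs, ys], dim=-1)               # (size, size, 2)
    d2 = ((grid[None] - points[:, None, None, :2]) ** 2).sum(-1)
    return torch.exp(-d2 / sigma).amax(dim=0)          # blend over points

# Hypothetical target: the rendered silhouette of two known 3D points.
target = soft_render(torch.tensor([[0.5, 0.5, 0.0], [-0.5, -0.5, 0.0]]))

# Optimise unknown 3D points so their rendering matches the target image.
points = (0.1 * torch.randn(2, 3)).requires_grad_()
opt = torch.optim.Adam([points], lr=0.05)
init_loss = ((soft_render(points) - target) ** 2).mean().item()
for _ in range(300):
    opt.zero_grad()
    loss = ((soft_render(points) - target) ** 2).mean()
    loss.backward()   # gradients flow through the renderer to 3D params
    opt.step()
final_loss = loss.item()
```

In DIB-R the same principle drives a neural network that predicts mesh geometry, texture, and lighting, with the differentiable rasterizer closing the loop between the predicted 3D model and the input photograph.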



According to NVIDIA's blog post, the new AI could significantly change robotics, gaming, AR/VR, object tracking, and other autonomous technologies such as self-driving cars. As for everyday use cases, DIB-R could let you turn an old photo of your parents, your child's drawing, or a favorite 2D game into 3D.

"Imagine you can just take a photo and out comes a 3D model, which means that you can now look at that scene that you have taken a picture of [from] all sorts of different viewpoints. You can go inside it potentially, view it from different angles — you can take old photographs in your photo collection and turn them into a 3D scene and inspect them like you were there, basically," Sanja Fidler, NVIDIA's director of AI, told VentureBeat.

DIB-R is included in Kaolin, NVIDIA's new 3D deep learning PyTorch library, which helps people get started with 3D processing using neural networks.
