NVIDIA Uses Machine Learning to Extract 3D Models from 2D Images

Published December 13, 2019

NVIDIA has published research demonstrating the use of machine learning to infer a 3D model from a single 2D image. From the blog:

In traditional computer graphics, a pipeline renders a 3D model to a 2D screen. But there’s information to be gained from doing the opposite — a model that could infer a 3D object from a 2D image would be able to perform better object tracking, for example.

NVIDIA researchers wanted to build an architecture that could do this while integrating seamlessly with machine learning techniques. The result, DIB-R, produces high-fidelity rendering by using an encoder-decoder architecture, a type of neural network that transforms input into a feature map or vector that is used to predict specific information such as shape, color, texture and lighting of an image.
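To make the idea of an encoder-decoder predictor concrete, here is a minimal, hypothetical sketch in PyTorch. It is not NVIDIA's DIB-R implementation; it only illustrates the structure the blog describes: an encoder compresses a 2D image into a feature vector, and decoder heads predict per-vertex shape offsets, per-vertex colors, and lighting parameters for a template mesh. All layer sizes and names (ImageTo3DPredictor, num_vertices, latent_dim) are illustrative assumptions.

import torch
import torch.nn as nn

class ImageTo3DPredictor(nn.Module):
    """Illustrative encoder-decoder: image -> latent vector -> mesh attributes.
    A sketch of the general idea, not NVIDIA's DIB-R architecture."""

    def __init__(self, num_vertices=642, latent_dim=256):
        super().__init__()
        self.num_vertices = num_vertices
        # Encoder: convolutions reduce a 64x64 RGB image to a feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim), nn.ReLU(),
        )
        # Decoder heads: separate predictions for geometry, color, and lighting.
        self.shape_head = nn.Linear(latent_dim, num_vertices * 3)  # vertex offsets
        self.color_head = nn.Linear(latent_dim, num_vertices * 3)  # per-vertex RGB
        self.light_head = nn.Linear(latent_dim, 9)                 # lighting coefficients (assumed)

    def forward(self, image):
        z = self.encoder(image)
        offsets = self.shape_head(z).view(-1, self.num_vertices, 3)
        colors = torch.sigmoid(self.color_head(z)).view(-1, self.num_vertices, 3)
        lighting = self.light_head(z)
        return offsets, colors, lighting

# Usage: a batch of 64x64 RGB images yields mesh attributes that a
# differentiable renderer could rasterize back to 2D for a reconstruction loss.
model = ImageTo3DPredictor()
images = torch.randn(4, 3, 64, 64)
offsets, colors, lighting = model(images)
print(offsets.shape, colors.shape, lighting.shape)  # (4, 642, 3) (4, 642, 3) (4, 9)

In the actual system, the key piece is that the renderer itself is differentiable, so the difference between the rendered prediction and the input image can be backpropagated through the rasterization step to train heads like the ones sketched above.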

The paper, "Learning to Predict 3D Objects with an Interpolation-Based Renderer" is available here.


