In an announcement at the Computer Vision and Pattern Recognition (CVPR) conference in New Orleans this week, Nvidia unveiled its new tool for designers to create digital assets. Called 3D MoMa, the new method uses an inverse rendering pipeline to create 3D objects from 2D images.
This may sound familiar: Nvidia's research into neural radiance fields can also create 3D scenes from 2D images. But there is a major difference between the two. Objects created with 3D MoMa are triangle mesh models that are immediately ready for import into graphics engines.
As a demonstration, Nvidia's research and creative teams modeled jazz instruments using 3D MoMa. The process takes about an hour per object on a single Nvidia Tensor Core GPU. The team then brought the newly created models into Nvidia Omniverse and modified their attributes on the fly. Once this technology is widely available, the amount of time it saves per object could be immense. It also lets designers tweak existing models to improve their finished quality, instead of making them from scratch.
Inverse rendering "has long been a holy grail unifying computer vision and computer graphics," said David Luebke, vice president of graphics research at Nvidia, in an email to GamesBeat. "By formulating every piece of the inverse rendering problem as a GPU-accelerated differentiable component, the NVIDIA 3D MoMa rendering pipeline uses the machinery of modern AI and the raw computational horsepower of NVIDIA GPUs to quickly produce 3D objects that creators can import, edit, and extend without limitation in existing tools."
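For readers curious what "differentiable component" means in practice, here is a minimal sketch of the idea in PyTorch. It is not Nvidia's code, and the render function below is a toy stand-in for a real renderer; it only illustrates the core trick, which is that when every rendering step is differentiable, an optimizer can recover scene parameters by comparing rendered output against observed images.

```python
import torch

# Toy illustration of differentiable inverse rendering (not Nvidia's
# 3D MoMa pipeline). The "scene parameters" here are just a vector;
# the point is that gradients flow through the renderer.

def render(params: torch.Tensor) -> torch.Tensor:
    # Stand-in renderer: a smooth, fully differentiable transform
    # (a nonlinearity plus a fixed "lighting" ramp).
    light = torch.linspace(0.5, 1.5, params.shape[-1])
    return torch.sigmoid(params) * light

# The "photograph" we observe comes from unknown true parameters.
true_params = torch.randn(64)
target_image = render(true_params)

# Inverse rendering: start from a random guess and fit it by
# gradient descent until its rendering matches the observation.
params = torch.randn(64, requires_grad=True)
opt = torch.optim.Adam([params], lr=0.05)

for step in range(500):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(render(params), target_image)
    loss.backward()  # backpropagate through the renderer itself
    opt.step()

print(f"final image error: {loss.item():.6f}")
```

In 3D MoMa itself, the recovered parameters are a triangle mesh with materials and lighting rather than a toy vector, and the rendering runs on the GPU, but the fit-by-gradient-descent loop is the same basic machinery.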