Nvidia’s new AI converts real-life videos into 3D renders
NVIDIA has announced groundbreaking AI research that, for the first time, enables developers to render entirely synthetic, interactive 3D environments using a model trained on real-world videos.
The technology offers the potential to quickly create virtual worlds for automotive, gaming, robotics, architecture, and virtual reality applications. The network can, for example, generate interactive scenes based on real-world locations, or show consumers dancing like their favorite pop stars.
“NVIDIA has been inventing new ways to generate interactive graphics for 25 years, and this is the first time we can do so with a neural network,” said Bryan Catanzaro, vice president of Applied Deep Learning Research at NVIDIA, who led the team developing this work. “Neural networks — specifically generative models — will change how graphics are created. This will enable developers to create new scenes at a fraction of the traditional cost.”
The result of the research is a simple driving game that lets participants navigate an urban scene. All of the content is rendered interactively by a neural network that transforms sketches of a 3D world, produced by a traditional graphics engine, into video. The interactive demo will be shown at the NeurIPS 2018 conference in Montreal.
The generative neural network learned to model the appearance of the world, including materials and their dynamics. Because the scene is fully synthetically generated, it can easily be edited to remove, add, or modify objects.
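The division of labor described above is that a conventional graphics engine emits a rough per-pixel semantic sketch of the scene, and the neural network fills in realistic appearance. The toy sketch below illustrates why a label-map intermediate makes scenes easy to edit; the variable names and the trivial lookup "renderer" are stand-ins for illustration, not NVIDIA's actual model.

```python
# Toy illustration of the sketch-to-video idea: a traditional engine emits a
# per-pixel semantic label map, and a renderer (here a trivial colour lookup
# standing in for the trained generative network) turns labels into appearance.
# Editing the scene amounts to editing labels before re-rendering.

# A 2x4 "sketch" from the graphics engine: each cell is a semantic class.
sketch = [
    ["sky",  "sky",  "sky", "sky"],
    ["road", "road", "car", "road"],
]

# Stand-in for the neural renderer: semantic label -> RGB appearance.
APPEARANCE = {
    "sky":  (135, 206, 235),
    "road": (90, 90, 90),
    "car":  (200, 30, 30),
}

def render(label_map):
    """Map each semantic label to a pixel colour (the network's job in the demo)."""
    return [[APPEARANCE[label] for label in row] for row in label_map]

frame = render(sketch)

# Because content is defined by the label map, removing the car is a one-cell edit:
sketch[1][2] = "road"
edited_frame = render(sketch)
```

In the actual system the renderer is a generative network trained on real driving videos, so it produces photorealistic, temporally coherent frames rather than flat colours, but the editability works the same way: change the semantic sketch and re-render.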
“The capability to model and recreate the dynamics of our visual world is essential to building intelligent agents,” the researchers wrote in their paper. “Apart from purely scientific interests, learning to synthesize continuous visual experiences has a wide range of applications in computer vision, robotics, and computer graphics.”