
GANcraft: Turning Gamers into 3D Artists

GANcraft is a hybrid unsupervised neural rendering pipeline for representing large and complex scenes, demonstrated on Minecraft voxel worlds.

Scientists at NVIDIA and Cornell University introduced a hybrid unsupervised neural rendering pipeline to represent large and complex scenes efficiently in voxel worlds. Essentially, a 3D artist only needs to build the bare minimum, and the algorithm fills in the rest to produce a photorealistic world. The researchers applied this pipeline to Minecraft block worlds to generate a far more realistic version of the Minecraft scenery.
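To make the input concrete, a block world is essentially a 3D grid of semantic labels. The sketch below is illustrative only; the label set and layout are placeholders, not taken from the paper:

```python
# A minimal sketch of the kind of input such a pipeline consumes: a 3D grid
# of semantic block labels. Labels and dimensions here are illustrative.
import numpy as np

LABELS = {0: "air", 1: "grass", 2: "water", 3: "tree"}

world = np.zeros((64, 64, 32), dtype=np.uint8)  # (x, y, z) voxel grid, all air
world[:, :, 0] = 1                              # flat grass ground plane
world[20:40, 20:40, 0] = 2                      # a small lake
world[10, 10, 1:5] = 3                          # a tree trunk rising from the grass

# The renderer's job is to turn this coarse semantic layout into a
# photorealistic image from any camera viewpoint.
```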

Previous works from NVIDIA and the broader research community (pix2pix, pix2pixHD, MUNIT, SPADE) have tackled the problem of image-to-image translation (im2im): converting an image from one domain to another. At first glance, these methods might seem to offer a simple solution to the task of transforming one world into another, translating one image at a time. However, im2im methods do not preserve viewpoint consistency: they have no knowledge of the 3D geometry, and each 2D frame is generated independently. As the comparison in Figure 1 shows, these methods exhibit jitter and abrupt color and texture changes.
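To see why, consider a minimal sketch of per-frame translation (this is not the code of any of these projects; `im2im_model` is a placeholder for any 2D translator such as SPADE):

```python
# A minimal sketch of per-frame im2im translation; `im2im_model` is a
# placeholder for any 2D translator (e.g., SPADE), not actual project code.
import torch

def translate_video(frames, im2im_model):
    """Translate a video one frame at a time.

    The model sees a single 2D frame with no camera pose or 3D geometry,
    so nothing ties the output at frame t to the output at frame t + 1.
    """
    outputs = []
    for frame in frames:                            # iterable of (C, H, W) tensors
        with torch.no_grad():
            out = im2im_model(frame.unsqueeze(0))   # independent per-frame call
        outputs.append(out.squeeze(0))
    # The same surface can receive different colors and textures in nearby
    # frames, which appears as flicker and jitter during playback.
    return torch.stack(outputs)
```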

Figure 1. A side-by-side comparison of prior methods (MUNIT, SPADE, wc-vid2vid, NSVF-W) and GANcraft. The earlier renderings do not hold up as consistently as GANcraft's; blending and distortion occur.

Enter GANcraft, a new method that directly operates on the 3D input world. 

“As the ground truth photorealistic renderings for a user-created block world simply don’t exist, we have to train models with indirect supervision,” the researchers explained in the study.

The method works by randomly sampling camera views in the input block world and then imagining what a photorealistic version of that view would look like. This is done with the help of SPADE, prior work from NVIDIA on image-to-image translation that was the key component of the popular GauGAN demo. GANcraft overcomes the view inconsistency of these generated “pseudo-ground truths” through a style-conditioning network that disentangles the world structure from the rendering style. This enables GANcraft to generate output videos that are view consistent, as well as rendered in different styles, as shown in Figure 2.
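In rough pseudocode, one training step of this scheme might look like the sketch below. Every name here is a stand-in for the paper's actual components, not GANcraft's API:

```python
# A schematic sketch of the training idea described above; all names are
# illustrative stand-ins, not GANcraft's actual code.
import torch
import torch.nn.functional as F

def training_step(voxel_world, camera_sampler, seg_renderer,
                  spade_model, neural_renderer):
    """One conceptual optimization step (hypothetical helpers throughout)."""
    camera = camera_sampler(voxel_world)           # 1. sample a random camera view
    seg_map = seg_renderer(voxel_world, camera)    # 2. project voxel labels to 2D

    with torch.no_grad():                          # 3. "imagine" a photorealistic
        pseudo_gt, style_code = spade_model(seg_map)  # pseudo-ground truth (2D only)

    # 4. Render the same view with the 3D-aware model, conditioned on the
    #    style code so that world structure and rendering style stay separate.
    rendered = neural_renderer(voxel_world, camera, style_code)

    # 5. Match the pseudo-ground truth; the real system adds adversarial and
    #    perceptual losses on top of this reconstruction term.
    return F.l1_loss(rendered, pseudo_gt)
```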

Figure 2. GANcraft’s methodology enables view consistency in a variety of different styles.

While the results of the research are demonstrated in Minecraft, the method works with other voxel-based 3D worlds as well. By reducing the time and expertise needed to build high-definition worlds, this research could help game developers, CGI artists, and the animation industry construct large, impressive scenes far more quickly.

If you would like a further breakdown of the potential of this technology, Károly Zsolnai-Fehér highlights the research in his YouTube series, Two Minute Papers:


Figure 3. The YouTube series, Two Minute Papers, covers significant developments in AI as they come onto the scene.

GANcraft was implemented in the Imaginaire library. This library is optimized for training generative models and generative adversarial networks, with support for multi-GPU, multi-node, and automatic mixed-precision training. Implementations of more than 10 NVIDIA research works, along with pretrained models, have been released, and the library will continue to be updated with newer works over time.
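As a point of reference, automatic mixed-precision training in plain PyTorch (which Imaginaire builds on) follows the standard pattern sketched below; the model, data, and optimizer are placeholders, not Imaginaire code:

```python
# A generic PyTorch automatic mixed-precision (AMP) training pattern of the
# kind Imaginaire automates; model, data, and optimizer are placeholders.
import torch

model = torch.nn.Linear(512, 512).cuda()     # placeholder network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()         # scales losses to avoid fp16 underflow

for step in range(100):
    x = torch.randn(8, 512, device="cuda")   # placeholder batch
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():          # run the forward pass in mixed precision
        loss = model(x).pow(2).mean()
    scaler.scale(loss).backward()            # backward on the scaled loss
    scaler.step(optimizer)                   # unscale gradients, then step
    scaler.update()                          # adjust the loss scale for next step
```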

If you’d like to dive deeper, grab the code from the Imaginaire repository on GitHub, see the overview of the framework, or read the detailed research paper.

Stay updated on more exciting research from NVIDIA at NVIDIA Research.

The study authors include Zekun Hao, Arun Mallya, Serge Belongie, and Ming-Yu Liu.
