NVIDIA’s AI System “GauGAN” Turns Doodles into Stunning, Photorealistic Landscapes
A deep learning model developed by NVIDIA Research can turn rough doodles into photorealistic masterpieces with breathtaking ease. The tool leverages generative adversarial networks, or GANs, to convert segmentation maps into lifelike images.
The interactive app built on the model is named GauGAN.
GauGAN could offer a powerful tool for creating virtual worlds to everyone from architects and urban planners to landscape designers and game developers. With an AI that understands how the real world looks, these professionals could better prototype ideas and make rapid changes to a synthetic scene.
GauGAN allows users to draw their own segmentation maps and manipulate the scene, tagging each segment with a label like sand, sky, sea, or snow.
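To make the idea of a segmentation map concrete, here is a minimal sketch in Python. The label names come from the article; the specific class IDs, the tiny grid, and the one-hot encoding step are illustrative assumptions — conditional image-synthesis models commonly consume label maps in a one-hot form, but this is not GauGAN's actual code or class list.

```python
import numpy as np

# Illustrative label IDs -- not GauGAN's actual class list.
LABELS = {"sky": 0, "sea": 1, "sand": 2, "snow": 3}

# A tiny 4x4 "doodle": sky on top, a band of sea, sand at the bottom.
seg_map = np.array([
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [2, 2, 2, 2],
])

def to_one_hot(seg, num_classes):
    """Convert an (H, W) label map into an (H, W, C) one-hot tensor,
    a common input format for conditional image-synthesis models."""
    return np.eye(num_classes, dtype=np.float32)[seg]

one_hot = to_one_hot(seg_map, num_classes=len(LABELS))
print(one_hot.shape)  # (4, 4, 4)

# Swapping a segment's label ("sea" -> "snow") is a simple relabel,
# which is all the user does when repainting a region in the app.
seg_map[seg_map == LABELS["sea"]] = LABELS["snow"]
```

The point of the sketch is that the user's doodle is just a grid of class IDs; the heavy lifting of turning that grid into a photorealistic image is done entirely by the trained generator.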
Trained on a million images, the deep learning model then fills in the landscape with show-stopping results: Draw in a pond, and nearby elements like trees and rocks will appear as reflections in the water. Swap a segment label from “grass” to “snow” and the entire image changes to a winter scene, with a formerly leafy tree turning barren.