Nvidia has introduced a new method for training AI models on limited data sets. Using a fraction of the study material a typical GAN requires, a model can now learn complex skills, be it recreating images of cancer tissue or emulating famous painters.
The researchers at Nvidia reimagined artwork based on fewer than 1,500 images from the Metropolitan Museum of Art. This was made possible by applying a new neural network training technique to the StyleGAN2 model.
StyleGAN2 is Nvidia’s open-source GAN, which consists of two competing networks: a generator that creates synthetic images and a discriminator that learns what realistic images should look like from the training data set. The same line of GAN research has powered various AI applications, such as GauGAN, an AI painting app; GameGAN, a game-engine mimicker; and GANimal, a pet-photo transformer.
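To illustrate the generator-versus-discriminator dynamic described above, here is a minimal toy sketch of adversarial training on one-dimensional data. This is not Nvidia's StyleGAN2 code; all names, hyperparameters, and the Gaussian "real data" distribution are illustrative assumptions, but the two-network structure mirrors how a GAN trains.

```python
import numpy as np

# Toy GAN on 1-D data: the generator learns to mimic a Gaussian "real
# data" distribution while the discriminator learns to tell real samples
# from generated ones. Purely illustrative; not Nvidia's StyleGAN2 code.
rng = np.random.default_rng(0)

REAL_MEAN, REAL_STD = 4.0, 1.25   # stand-in for the training data set

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: maps noise z ~ N(0, 1) to a sample via an affine transform.
w_g, b_g = 1.0, 0.0
# Discriminator: logistic score D(x) = sigmoid(w_d * x + b_d).
w_d, b_d = 0.0, 0.0

lr, batch, steps = 0.05, 64, 2000
for _ in range(steps):
    # --- discriminator step: push D(real) -> 1 and D(fake) -> 0 ---
    real = rng.normal(REAL_MEAN, REAL_STD, batch)
    z = rng.normal(size=batch)
    fake = w_g * z + b_g
    p_real = sigmoid(w_d * real + b_d)
    p_fake = sigmoid(w_d * fake + b_d)
    w_d += lr * np.mean((1 - p_real) * real - p_fake * fake)
    b_d += lr * np.mean((1 - p_real) - p_fake)

    # --- generator step: push D(fake) -> 1 (non-saturating loss) ---
    z = rng.normal(size=batch)
    fake = w_g * z + b_g
    p_fake = sigmoid(w_d * fake + b_d)
    grad_x = (1 - p_fake) * w_d        # d log D(fake) / d fake
    w_g += lr * np.mean(grad_x * z)
    b_g += lr * np.mean(grad_x)

fake_mean_final = float(np.mean(w_g * rng.normal(size=10_000) + b_g))
print(f"generated mean after training: {fake_mean_final:.2f}")
# The generated mean should drift from 0 toward REAL_MEAN as training
# progresses, which is the adversarial equilibrium in miniature.
```

StyleGAN2 scales this same two-player setup to high-resolution images, where the scarcity of training data is precisely what Nvidia's new technique addresses.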