The emergence and continual refinement of Generative Adversarial Networks (GANs) have produced impressive AI-generated artworks in many styles. Alice Xue, an undergraduate at Princeton, recently designed a GAN framework for generating Chinese landscape paintings whose outputs are often mistaken for the real thing.
According to the paper, the proposed framework, Sketch-And-Paint GAN (SAPGAN), is the first end-to-end model for Chinese landscape painting generation that requires no conditional input. In a visual Turing test, 242 participants identified SAPGAN paintings as human artwork significantly more often than paintings from baseline GANs. Xue explains that popular GAN-based art generation methods such as style transfer rely heavily on conditional inputs. Models dependent on conditional input have limited generative capability, because each image is built from a single, human-fed input; in practice they can only produce derivative artworks that amount to stylistic copies of that input.
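To make the "end-to-end, no conditional input" idea concrete, the pipeline can be caricatured as two stages: a first generator that maps pure noise to a rough sketch, and a second that "paints" over that sketch. The sketch below is purely illustrative, not SAPGAN's actual architecture; all function names and the toy arithmetic are placeholders standing in for trained networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def sketch_generator(z):
    # Stand-in for a first-stage generator: map a random latent vector
    # to a binary 8x8 "edge map" via a random linear projection and a
    # threshold. A real model would be a trained neural network.
    w = rng.standard_normal((z.size, 64))
    return (z @ w > 0).astype(float).reshape(8, 8)

def paint_generator(sketch):
    # Stand-in for a second-stage generator: "paint" the sketch by
    # blending its edge structure with smooth random texture.
    texture = rng.random(sketch.shape)
    return 0.7 * sketch + 0.3 * texture

# End-to-end generation starts from noise alone -- no human-supplied
# conditional image is needed, unlike style-transfer methods.
z = rng.standard_normal(16)
painting = paint_generator(sketch_generator(z))
print(painting.shape)  # (8, 8)
```

The key contrast with style transfer is the first line of the pipeline: generation begins from a sampled latent vector `z` rather than from a user-provided reference image, so every output is a new composition rather than a restyled copy.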