If computers had eyes, would they appreciate how humans look? AI has already shown great potential in voice recognition, and apparently it can also do well in creating visual art.
Mike Tyka, a Google scientist who has always been fascinated by molecules, took a different approach to art: combining AI and machine learning to create digital portraits of people who don't exist.
The obvious problem Tyka ran into was resolution and fine detail. The receptive field of these networks is usually less than 256×256 pixels. To work around this, he stacked GANs, with a second-stage network increasing the resolution of the first stage's output. Using this approach, he upscaled the GAN-generated faces to 768×768, and experimented with going as far as 4K×4K.
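To make the stacking idea concrete, here is a toy, shape-level sketch (not Tyka's actual code): a first-stage "generator" produces a 256×256 image from a latent vector, and a second stage upsamples it 3× to 768×768. The random projection and the smoothing step are crude stand-ins for learned networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def stage1_generator(latent, out_size=256):
    """Toy stand-in for a first-stage GAN generator: maps a latent
    vector to a low-resolution 'image' via a random projection."""
    w = rng.standard_normal((latent.size, out_size * out_size))
    return np.tanh(latent @ w).reshape(out_size, out_size)

def stage2_upscaler(img, factor=3):
    """Toy stand-in for a second-stage GAN: nearest-neighbor upsample
    followed by a crude 'refinement' (averaging with a shifted copy)."""
    up = np.kron(img, np.ones((factor, factor)))  # 256x256 -> 768x768
    return 0.5 * (up + np.roll(up, 1, axis=0))

z = rng.standard_normal(64)           # latent vector
lowres = stage1_generator(z)          # shape (256, 256)
highres = stage2_upscaler(lowres)     # shape (768, 768)
print(lowres.shape, highres.shape)
```

In the real pipeline each stage is a trained network with its own discriminator; the sketch only shows how the resolutions chain together.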
As a result, the images show crisper detail. Below are some examples:
Tyka doesn't mind that the results aren't photorealistic. His focus was on generating fine texture, which he considers more important even if the result is surreal.
But to make that happen, he needed to deal with mode collapse and the poor controllability of the results. He also had to reduce the amount of unnecessary artifacts, which appeared especially in the second-stage GAN, where the output is metastable between smooth-skin and hairy-skin textures.
For this work, Tyka used vanilla GANs. He is considering WGAN, CramerGAN, or BEGAN as a way to further improve the results, and says that work is in progress.
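To illustrate why WGAN is an appealing alternative, here is a minimal sketch (an illustration, not Tyka's code) of the core difference between the two objectives: the vanilla discriminator loss is binary cross-entropy on probabilities in (0, 1), while the WGAN critic outputs unbounded scores and its loss is a simple difference of means, which tends to give smoother gradients and helps with mode collapse.

```python
import numpy as np

def vanilla_d_loss(d_real, d_fake):
    """Standard GAN discriminator loss: binary cross-entropy on
    sigmoid outputs, which must lie strictly in (0, 1)."""
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def wgan_critic_loss(c_real, c_fake):
    """WGAN critic loss: raw, unbounded scores with no log/sigmoid.
    The critic maximizes E[c(real)] - E[c(fake)], so the minimized
    loss is the negative of that difference."""
    return np.mean(c_fake) - np.mean(c_real)

d_real = np.array([0.9, 0.8])   # discriminator probabilities on real samples
d_fake = np.array([0.1, 0.2])   # and on generated samples
print(vanilla_d_loss(d_real, d_fake))

c_real = np.array([2.0, 3.0])   # raw critic scores on real samples
c_fake = np.array([-1.0, 0.0])  # and on generated samples
print(wgan_critic_loss(c_real, c_fake))  # -3.0
```

The full WGAN recipe also constrains the critic (weight clipping or a gradient penalty) to keep it approximately 1-Lipschitz; that part is omitted here.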