Machine Learning Developer Uses AI to Create Creepy Melancholy Glitch-Art Music Videos

Machine learning developer Jeff Zitto has made a series of music videos using Face2Face, an AI that was originally meant to generate realistic images. Here, however, Zitto took the project in a different direction.

"The intention was to create art, absolutely. Training these networks with hi-def images takes days on the cloud, which unfortunately is not free, so there’s not a lot of room to experiment in a purposeless way."

"We had a few unsuccessful attempts, which in this backwards world means producing content that’s too accurate and sterile, before we started to understand what kind of content to use and how to utilize it effectively."

"We use image-to-image translation, as outlined first in the pix2pix paper, to create a model from existing footage (like an interview with Italian philosopher Julius Evola or ‘Human Ken Doll’ Rodrigo Alves) and allow the performer to control and express themselves, both through the body and appearance of another person and through the fragmented and distorted perception of a slightly sick neural network."

Here, by tweaking the network's settings to give it fewer parameters than it would normally need, Zitto was able to generate creepy, melancholy glitch-art music videos; a rough sketch of that idea follows.
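
One plausible reading of "less-than-needed parameters" is deliberately under-provisioning and under-training the model so it never learns a clean mapping. The sketch below illustrates that idea only; the tiny network, layer sizes, and step counts are invented assumptions, not details from Zitto's project.

```python
import torch
import torch.nn as nn

# A deliberately under-sized image-to-image model: far too little capacity
# to reproduce its training footage faithfully.
glitch_generator = nn.Sequential(
    nn.Conv2d(3, 4, 4, stride=2, padding=1),
    nn.ReLU(),
    nn.ConvTranspose2d(4, 3, 4, stride=2, padding=1),
    nn.Tanh(),
)
optimizer = torch.optim.Adam(glitch_generator.parameters(), lr=2e-4)

# Train for only a handful of steps on placeholder frame pairs; stopping this
# early leaves outputs smeared and distorted rather than photorealistic,
# which is exactly the glitch-art aesthetic being chased.
for step in range(100):
    source = torch.rand(1, 3, 256, 256)
    target = torch.rand(1, 3, 256, 256)
    loss = nn.functional.l1_loss(glitch_generator(source), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```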

Jeff Zitto has proven that AI and deep learning technologies aren't the exclusive domain of Microsoft, Facebook, Google, or other high-profile tech companies.

Anyone can use and create AI, whether as a personal hobby or an artistic endeavor, and still make a valuable contribution to the AI community as a whole. And when someone experiments with AI in a different way, it helps others better understand how AI works.

The music behind the project was created by Lord Over, whom Zitto described as "a reclusive, somewhat shy artist."

Because of that, Zitto said, "we were looking for a way to obscure their face while allowing them to perform and emote as they might in real life."

Published: 
22/01/2018