This research blog is a collection of thoughts, internet fragments and pieces of information I find noteworthy. In general, you can expect posts about technology, politics, art and design.

For weekly updates, follow me on Facebook or subscribe to the Atom/RSS feed.

Tags: Machine Learning, Neural Network, computer generated

Mike Tyka's “Portraits Of Imaginary People” is an experiment in which he explores new ways of using generative neural networks to create portraits of, well, as the title suggests, imaginary people. His approach combines multiple networks in different stages. The actual generation of the faces is restricted to a resolution of roughly 256 × 256 pixels. To overcome this technical limitation of conventional neural networks, he upscales the output to higher resolutions using multiple stages of machine learning methods, achieving printable images with a resolution of up to 4000 × 4000 pixels. The aesthetic of the results is rough and tactile, with a quality all of its own, sometimes evoking associations with oil paintings or surrealism. Two things are worth noting: this is still a work in progress, an experiment with an uncertain outcome, and the results are highly cherry-picked. Visit his page for more information.
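Tyka doesn't spell out his pipeline in detail, but the basic idea, generate a small face first and then enlarge it through several learned upscaling stages, can be sketched roughly like this. The generator and the upscaling stage below are untrained placeholders I made up for illustration, not his actual networks:

```python
# Sketch of "generate small, then upscale in stages".
# Both networks are untrained placeholders, not Mike Tyka's actual models.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Placeholder generator: turns a latent vector into a 256 x 256 RGB image."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 3 * 256 * 256), nn.Tanh())

    def forward(self, z):
        return self.net(z).view(-1, 3, 256, 256)

class TinyUpscaler(nn.Module):
    """Placeholder super-resolution stage: doubles the resolution of its input."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(3, 3, kernel_size=3, padding=1),
        )

    def forward(self, image):
        return self.net(image)

generator = TinyGenerator()
stages = [TinyUpscaler() for _ in range(4)]  # 256 -> 512 -> 1024 -> 2048 -> 4096

with torch.no_grad():
    image = generator(torch.randn(1, 128))  # the low-resolution "imaginary face"
    for stage in stages:
        image = stage(image)                # each stage doubles the resolution
print(image.shape)  # torch.Size([1, 3, 4096, 4096])
```

In a real pipeline each stage would be a trained super-resolution network that invents plausible detail rather than just smoothing pixels, which may well be part of where the rough, painterly texture of the prints comes from.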

On a side note: the results reminded me of a project called “Composite” by Brian Joseph Davis, in which he generated police sketches of literary characters by running their descriptions from the books through composite-sketch software used by law enforcement. Looking at those results alongside the implications of Tyka's experiments, it becomes clear that law enforcement, too, is going to be changed by machine learning, computer vision and neural networks.

Tags: Machine Learning, Neural Network, Music Video, computer generated

This project by Damien Henry is an hour-long video set to music by Steve Reich. What you are seeing is not a style or filter applied to existing footage, but entirely new footage generated by a neural network. The network was trained on videos recorded from train windows, with landscapes moving from right to left. The algorithm uses a motion-prediction technique: essentially, it tries to predict the next frame of the video. Once trained, the network needs only a single frame as input to start generating new frames indefinitely.
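Henry hasn't documented the exact architecture in this post, but the core feedback loop, where each predicted frame becomes the input for the next prediction, might look roughly like this. The predictor below is an untrained placeholder, not his actual model:

```python
# Sketch of the autoregressive generation loop: every predicted frame
# is fed back in as the input for the next prediction.
# The predictor is an untrained placeholder, not Damien Henry's model.
import torch
import torch.nn as nn

class NextFramePredictor(nn.Module):
    """Placeholder: maps one low-resolution frame to a guess at the next frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 3, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, frame):
        return self.net(frame)

predictor = NextFramePredictor()  # in reality, trained on train-window footage

frame = torch.rand(1, 3, 64, 64)  # a single seed frame is enough to get going
generated = []
with torch.no_grad():
    for _ in range(25 * 60):          # e.g. one minute of footage at 25 fps
        frame = predictor(frame)      # the prediction becomes the next input
        generated.append(frame)
```

Because the model only ever sees its own slightly blurry, slightly wrong output after the first frame, small errors compound over time, which is a plausible reason the landscapes drift into the dreamlike territory described below.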

Eerily enough, the predicted footage captures the feeling of riding a train quite well. Even though the landscapes are more dreamlike than realistic, it is fascinating that the algorithm figured out on its own what makes a train ride a train ride: for example, that the background has to move more slowly than the foreground. It is important to note that the resolution is currently rather low due to the technical restrictions of neural networks, but you can expect the resolution and quality of such experiments to increase in the not-so-distant future. Machine learning is still in its infancy, and engineers, artists and coders are trying to figure out how these systems actually work and what they can and cannot do. I guess you can now cross "dreaming of train rides" off that list.