This research blog is a collection of thoughts, internet fragments and pieces of information I find noteworthy. In general, you can expect posts about technology, politics, art and design.
This recent project by Raphaël Fabre is currently making the rounds on blogs, and I decided to archive this fragment here as well. Using computer software and techniques from special effects in the film and video game industry, he modeled a photorealistic 3D portrait of himself and submitted it as the photo for his official French ID card. The photo was accepted, because "the image corresponds to the official demands for an ID: it is resembling, is recent, and answers all the criteria of framing, light, bottom and contrasts to be observed". For me, the interesting part is not necessarily that it got accepted at all; who can blame the authorities for not seeing through the charade, when the whole purpose of the 3D model was to look photorealistic? But I like this project because it took only one conceptual shift to transform the meaning of photorealistic CGI. By officially acknowledging the image, the authorities let the meaning of the 3D model transcend its origin: it marks and illustrates the merging of virtuality and reality, and thus elevates and comments on our interlaced, modern identities. Since photorealistic CGI has been around for decades, I am genuinely surprised that it took so long for someone to try this.
Mike Tyka's “Portraits Of Imaginary People” is an experiment in which he looks for new ways to use generative neural networks to make portraits of, well, as the title suggests, imaginary people. His approach combines multiple networks in different stages. The actual generation of the faces is limited to a resolution of roughly 256 × 256 pixels. To overcome this technical restriction of conventional neural networks, he upscales the output into higher resolutions using multiple stages of machine learning methods, achieving printable pictures with resolutions of up to 4000 × 4000 pixels. The aesthetic of the outcome is rough and haptic and has a quality of its own, sometimes evoking associations with oil paintings or surrealism. Two things to note: this is still a work in progress, an experiment with uncertain outcomes, and the results are highly cherry-picked. Visit his page for more information.
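Tyka hasn't published his exact pipeline in this post, but the staged-upscaling idea can be sketched: generate at the network's native low resolution, then enlarge in repeated 2× steps until print size is reached. In his work each step is a learned super-resolution network; the naive nearest-neighbour enlargement below is only a stand-in for such a stage.

```python
def upscale_2x(image):
    # Stand-in for one learned super-resolution stage: naive
    # nearest-neighbour 2x enlargement of a 2D pixel grid,
    # duplicating every row and every pixel within each row.
    return [
        [pixel for pixel in row for _ in (0, 1)]
        for row in image
        for _ in (0, 1)
    ]

def staged_upscale(image, target_size):
    # Chain 2x stages until the grid reaches the target resolution.
    while len(image) < target_size:
        image = upscale_2x(image)
    return image

# A placeholder "face" at the network's native 256 x 256 resolution.
face = [[(x + y) % 256 for x in range(256)] for y in range(256)]
big = staged_upscale(face, 4000)
print(len(big), len(big[0]))  # 4096 4096, after four 2x stages
```

Going from 256 to roughly 4000 pixels takes four such doublings (256 → 512 → 1024 → 2048 → 4096), which matches the "multiple stages" Tyka describes; the hard part he solves is making each doubling hallucinate plausible detail instead of merely copying pixels.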
On a side note: the results reminded me of a project called “Composite” by Brian Joseph Davis, who generated police sketches of literary characters by running their book descriptions through composite-sketch software used by law enforcement. Given the results and implications of Tyka's experiments, it becomes clear that law enforcement will also be changed by machine learning, computer vision and neural networks.
This demonstration by Behringer shows a mixed reality interface for real-time music manipulation. It uses Microsoft's HoloLens to display information in an augmented reality environment. Hand movements and positions are tracked by a Leap Motion, and the data is fed to the DeepMind 12 synthesizer. The demonstration above still seems a bit clunky here and there, and the actual music manipulation appears partially edited, but the implications of the technology stand to reason and tease my curiosity. I think music production and live performances are great fields for applying mixed reality interfaces: since they have an inherently haptic and sensual workflow, adding a new layer of interaction can benefit the creative process. And since everything still happens in a digital realm, you have the possibility to hack, customize and modify the inputs and outputs of that workflow. An example of this is the online course on Kadenze in which Rebecca Fiebrink of Goldsmiths, University of London shows ways of modifying sensors and inputs by means of machine learning to build custom instruments, e.g. for live performances or experimental music production.
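The sensor-to-synth chain in the demo presumably boils down to mapping tracked hand coordinates onto MIDI control-change messages that the synthesizer understands. A minimal sketch of that mapping follows; the coordinate range, the CC number and the "height controls filter cutoff" assignment are my assumptions for illustration, not details from Behringer's setup.

```python
def hand_to_cc(position_mm, lo=-200.0, hi=200.0):
    # Map a Leap-Motion-style hand coordinate (in millimetres,
    # assumed range lo..hi) to a MIDI control-change value 0..127,
    # clamping out-of-range input.
    clamped = max(lo, min(hi, position_mm))
    return round((clamped - lo) / (hi - lo) * 127)

def cc_message(channel, control, value):
    # Raw 3-byte MIDI control-change message:
    # status byte 0xB0 | channel, then controller number and value.
    return bytes([0xB0 | channel, control, value])

# Hypothetical assignment: hand height drives filter cutoff (CC 74).
msg = cc_message(channel=0, control=74, value=hand_to_cc(50.0))
print(msg.hex())  # b04a4f
```

In a real patch these bytes would be sent to the synthesizer's MIDI port on every tracking frame; the point of the sketch is that the "mixed reality" layer is, at the protocol level, just a continuous stream of small control messages, which is exactly what makes it so hackable.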
SketchAR is an application through which the user sees a virtual image on the surface they are planning to trace a sketch onto. It is designed to help people who have always wanted to draw but could not. This iteration of the app uses depth sensors that currently only a few specialized phones have. These sensors are being developed by Google under the name Project Tango and enable a mobile device to become aware of its physical location in the world. SketchAR uses this new spatial information in combination with augmented reality for large-scale drawings, stencils and graffiti.
Together with Monotype, Google engineered a universal typeface family that spans more than 800 languages, 100 writing systems and hundreds of thousands of characters. The name "Noto" is an abbreviation of “no more tofu”. Tofu, in this case, is a nickname for the blank boxes (▯) that appear when a computer or website lacks font support for a specific character or letter. The amount of work, research and dedication is breathtaking; I am surprised that the team realized this mammoth project in only five years. Some of the languages in this typeface family had never been digitised before: they are niche languages which only existed in spoken form or are found mostly on monuments and manuscripts. For Adlam, for example, a writing system for the Fulani language of Africa, Monotype worked with the script's original creators. Having direct access to the inventors of this writing system allowed the designers to incorporate stylistic choices and features that reflect the creators' original intentions, and gave the Fulani-speaking community its first chance to use the script digitally.
This cultural preservation is what I love most about this project. Some of the typefaces and writing systems would probably have been forgotten otherwise, so the font family serves as a kind of contemporary digital Tower of Babel. The fact that the whole project is open source, free to use and constantly expanding is a great example of how graphic design can connect mankind, democratize communication and preserve culture and tradition in our digital age.