This research blog is a collection of thoughts, internet fragments and pieces of information I find noteworthy. In general, you can expect posts about technology, politics, art and design.

For weekly updates, follow me on Facebook or subscribe to the Atom/RSS feed.

Typography Machine Learning Article Graphic Design

Font pairing is a classic part of developing a typographic concept during the design process. Different font pairings change how content is presented: they can draw or deter attention, express personality, shape an identity or attitude, and of course improve legibility and the user experience. Important factors to aid your decision are visual contrast, various typographic measures and features (like x-height and ascenders/descenders), and the history and origin of the fonts. The developer of Fontjoy tried to quantify and analyse what generally makes a good font pairing: "Good font combinations tend to be fonts that share certain similarities, but contrast in some specific way." As highlighted multiple times in this blog, neural networks are great at finding similarities and correlations in big data sets. For Fontjoy, the developers used machine learning to analyse more than 1800 fonts; the algorithm itself identifies the features, orders them in a multidimensional grid, and outputs font pairings based on the developer's definition.
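To make that definition a bit more tangible, here is a minimal sketch of the "similar, but contrasting" idea, assuming each font has already been mapped to a feature vector by some trained network. The function names and the weighting are my own hypothetical choices, not Fontjoy's actual code:

```python
import numpy as np

# Hypothetical sketch of "similar, but contrasting" font pairing.
# Assumes each font has already been embedded as a feature vector
# (e.g. by a neural network trained on glyph images).

def pairing_score(a: np.ndarray, b: np.ndarray) -> float:
    """Score a font pair: reward overall similarity, but also reward
    strong contrast along a few individual feature dimensions."""
    diff = np.abs(a - b)
    overall_similarity = -np.linalg.norm(diff)      # close in most respects...
    top_contrast = np.sort(diff)[-3:].sum()         # ...but far apart in a few
    return overall_similarity + 2.0 * top_contrast  # weighting is arbitrary

def best_partner(query: np.ndarray, library: dict) -> str:
    """Return the name of the library font that pairs best with `query`."""
    return max(library, key=lambda name: pairing_score(query, library[name]))
```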

Of course, this interests me, because it actively enters my own domain as a graphic designer and, in a way, questions my own creativity and decision making. Fontjoy is not the only AI-driven, design-focused tool out there; it is clear that a trend towards AI-assisted design features is emerging. Take Wix, for example: one of the more popular website-building tools, it uses an algorithmic approach to make it easy for amateurs to build websites that are pleasing to the eye. Wix feeds the algorithm high-quality websites and tries to make style suggestions relevant to the client's industry. Firedrop.ai is able to generate landing pages with an AI assistant called Sacha; you write your changes and desired features in a chat, and Sacha talks back and delivers. Autodesk Dreamcatcher generates thousands of iterations and alternative design solutions for industrial designers and CAD users. The Grid is a paid service that offers "websites that design themselves". LogoJoy claims to generate logos "you will be proud of".

So that's it, right? Graphic design will be obsolete in the future, a fragment of the past like a pixelated photograph of a Blockbuster store you took on your rad new flip-phone.

Well, the answer is not a clear yes or no. While it is somewhat true that the design process can often be reduced to variables, to inputs and desired outputs, the reality is, as so often, more complex. Probably not so complex that an algorithm could not grasp it, distill its essence and reflect it like a mirror; what I mean is the history and development of design as a cultural phenomenon, as an expression of the zeitgeist and society. In that context, I think we are in a transition phase: we will look back at graphic design in the 2010s the way we look back at the days when typesetting was done by hand, whether through phototypesetting or hot metal typesetting. Maybe we will be nostalgic about how, back in the day, we actually did mock-ups by hand in Photoshop, or when we had to apply and define 200 pages of branding guidelines for every medium manually.

I learned in university how to set type by hand using hot metal typesetting, not only to get a better understanding of the origin of the technical terminology in InDesign, but also to grasp why typesetting works the way it does, why certain dogmas are valid, and what it means when you break them. So I can imagine a future where students have to, for example, write stylesheets for different screen sizes "by hand", just to understand how and why the program/app/digital assistant/algorithm/[...] acts the way it does. And that's a good thing. Understanding the tools, the "why and how" behind the GUI or machine, leads us to the "why not and how else", to experiments, new ways of expression and solutions better suited to unique questions. And ultimately back to a mirrored zeitgeist through graphic design.

To stay with the example of typesetting, we can look at desktop publishing (DTP), which replaced phototypesetting with a digital equivalent in the form of layout programs. In a matter of years, the job of a graphic designer changed: layout work that used to take hours could now be done in minutes, with instant visual feedback. The new tools not only pushed productivity and reduced costs, but also opened new ways of expression. For example, an explosion of new typefaces hit the market and a new aesthetic developed. Take Emigre, Neville Brody or David Carson: all shaped the zeitgeist of the 90s with their aesthetic. That development is ultimately tied closely to the technical possibilities of its time, because the tools that are deeply connected to the aesthetic, well, simply did not exist before.

That is why I am optimistic about AI-assisted design: it will be a powerful new addition to the designer's toolbelt, able to free us from mundane tasks. I think we should cherish this phase of transition for what it is: a possibility for something new. Algorithms, automation and AI-assisted design will change the job of a graphic designer once again; productivity will rise while costs decrease. At the same time, new challenges, demands and problems will emerge, but so will new solutions, applications and ideas. The examples above may still signal a demise of graphic design, but for me they are the equivalent of fast food, an instant and short-lived gratification. LogoJoy is to the eye what a Big Mac is to the stomach. For the everyday user, this will suffice, but, well, so did WordArt.

more information:
The automation of design, by Kai Brunner (TechCrunch)
Algorithm-Driven Design: How Artificial Intelligence Is Changing Design, by Yury Vetrov (Smashing Magazine)
Taking The Robots To Design School, Part 1, Great read by Jon Gold, who worked at The Grid

3D

This recent project by Raphaël Fabre is currently making the rounds through the blogs, and I decided to archive this fragment here as well. Using computer software and techniques from special effects in the movie and video game industries, he modeled a photorealistic 3D portrait of himself and submitted it as his photo for his official French ID card. The photo was accepted, because "the image corresponds to the official demands for an ID: it is resembling, is recent, and answers all the criteria of framing, light, bottom and contrasts to be observed". For me, the interesting part is not necessarily that it got accepted at all; who can blame the authorities for not seeing through the charade, when the whole purpose of the 3D model was to look photorealistic? I like this project because it took only one conceptual shift to transform the meaning of photorealistic CGI. With the image officially acknowledged, the meaning of the 3D model transcends its medium: it marks and illustrates the merging of virtuality and reality, and thus elevates and comments on our interlaced, modern identities. Since photorealistic CGI has been around for decades, I am genuinely surprised that it took so long for someone to try this.

Machine Learning Neural Network computer generated

Mike Tyka's “Portraits Of Imaginary People” is an experiment in which he looks for new ways to use generative neural networks to make portraits of, well, as the title suggests, imaginary people. His approach combines multiple networks in different stages. The actual generation of the faces is restricted to a resolution of roughly 256 × 256 pixels. To overcome this technical restriction of conventional neural networks, he upscales the output to higher resolutions in multiple stages of machine learning methods, achieving printable pictures with resolutions of up to 4000 × 4000 pixels. The aesthetic of the actual outcome is rough, haptic and has its very own quality, sometimes evoking associations with oil paintings or surrealism. Two things to note: this is still a work in progress, an experiment with uncertain outcomes, and the results are highly cherry-picked. Visit his page for more information.
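The staged-upscaling idea can be sketched in a few lines. The following is an illustration under my own assumptions (a tiny residual network standing in for each separately trained super-resolution stage), not Tyka's actual pipeline:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RefineNet(nn.Module):
    """Tiny stand-in for one learned super-resolution stage."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # predict a residual correction

def upscale_in_stages(face: torch.Tensor, stages: list) -> torch.Tensor:
    """face: (1, 3, 256, 256) output of the generative network."""
    for refine in stages:
        # Naive 2x upsample, then let the trained stage add plausible detail.
        face = F.interpolate(face, scale_factor=2, mode="bilinear",
                             align_corners=False)
        face = refine(face)
    return face  # four stages take 256 to 4096, roughly the 4000 x 4000 range
```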

On a side note: the results reminded me of a project called “Composite” by Brian Joseph Davis, where he generated police sketches of literary characters by running their book descriptions through composite-sketch software used by law enforcement. Given those results and the implications of Tyka's experiments, it is easy to see that law enforcement is also going to be changed by machine learning, computer vision and neural networks.

Augmented Reality

This demonstration from Behringer shows a mixed reality interface for real-time music manipulation. It utilizes Microsoft's HoloLens to display information in an augmented reality environment. The hand movements and positions are tracked by a Leap Motion, and the data is fed to the DeepMind 12 synthesizer. The demonstration above still seems a bit clunky here and there, and the actual music manipulation seems partially edited, but the potential of the technology is evident and piques my curiosity. I think music production and live performances are great fields for mixed reality interfaces, since their workflow is inherently haptic and sensual; adding a new layer of interaction can benefit the creative process. And since everything still happens in a digital realm, you have the possibility to hack, customize and modify the inputs and outputs of that workflow. An example of that would be the online course on Kadenze, where Rebecca Fiebrink of Goldsmiths, University of London shows ways of modifying sensors and inputs by means of machine learning to build custom instruments, e.g. for live performances or experimental music production.
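At its core, the tracking-to-synth pipeline boils down to mapping hand coordinates onto MIDI control changes. Here is a minimal sketch of that idea in Python with the mido library; read_hand_position() is a placeholder for the Leap Motion SDK, and the port name and CC numbers are my own arbitrary assumptions, not Behringer's actual mapping:

```python
import time
import mido

def read_hand_position():
    """Stub: return (x, y) in 0.0..1.0 from the hand tracker.
    In a real setup this would query the Leap Motion SDK."""
    raise NotImplementedError

# Port name depends on your system's MIDI configuration.
with mido.open_output("DeepMind12") as port:
    while True:
        x, y = read_hand_position()
        # Map normalized hand coordinates onto two continuous controllers.
        port.send(mido.Message("control_change", control=74,
                               value=int(x * 127)))  # e.g. filter cutoff
        port.send(mido.Message("control_change", control=71,
                               value=int(y * 127)))  # e.g. resonance
        time.sleep(0.01)  # roughly 100 updates per second
```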

Augmented Reality

SketchAR is an application that overlays a virtual image on the surface on which the user plans to trace a sketch. It is designed to help people who have always wanted to draw, but could not. This iteration of the app uses depth sensors that currently only a few special phones have. These sensors are being developed by Google under the name Project Tango and enable a mobile device to become aware of its physical position in the world. SketchAR uses this new spatial information in combination with augmented reality for large-scale drawings, stencils and graffiti.
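Stripped of the tracking, the core of the tracing experience is a semi-transparent template blended over the live camera feed. Below is a bare-bones sketch of just that blending step with OpenCV; the template filename is hypothetical, and the real app additionally anchors the template to a tracked surface via Tango's depth sensing, which this sketch deliberately leaves out:

```python
import cv2

# Hypothetical template image the user wants to trace.
template = cv2.imread("sketch_template.png")
camera = cv2.VideoCapture(0)

while True:
    ok, frame = camera.read()
    if not ok:
        break
    # Stretch the template over the whole frame and blend it in
    # as a faint "ghost" image to trace along.
    overlay = cv2.resize(template, (frame.shape[1], frame.shape[0]))
    blended = cv2.addWeighted(frame, 0.7, overlay, 0.3, 0)
    cv2.imshow("trace preview", blended)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

camera.release()
cv2.destroyAllWindows()
```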