Arthur I. Miller | The Artist in the Machine | 28.10.2019

illustration: Patryk Sroczyński, animation: Paweł Szarzyński

Today, computers are creating an extraordinary new world of images, sounds, and stories such as we have never experienced before. Gerfried Stocker, the outspoken artistic director of Ars Electronica in Linz, says provocatively, “Rather than asking whether machines can be creative and produce art, the question should be, ‘Can we appreciate art we know has been made by a machine?’”

Excerpted and adapted from “The Artist in the Machine: The World of AI-Powered Creativity” by Arthur I. Miller (The MIT Press, 2019).

Alexander Mordvintsev’s DeepDream sees things we don’t and conjures up images merged in extraordinary and, to the human eye, sometimes nightmarish ways. Ian Goodfellow’s generative adversarial networks (GANs) provide a way for computers to assess their creations without human intervention. As he puts it, they give AI a form of imagination. Computer scientist Ahmed Elgammal has used them to evolve his creative adversarial network (CAN) in the quest to create art not only definitively new but appealing to human eyes. Pix2Pix, creating fully developed images from an outline, and CycleGAN, merging two photographs, have created images never seen or even imagined before.
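To make the adversarial idea concrete, here is a minimal sketch in Python (using PyTorch) of the setup Goodfellow describes: a generator invents samples while a discriminator judges them, and each improves against the other, giving the system its own internal critic. The network sizes, the stand-in data, and the training loop are illustrative assumptions, not the architecture of any of the systems mentioned above.

# Minimal GAN sketch (illustrative only): a generator invents samples,
# a discriminator judges them, and each improves against the other.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2  # assumed toy dimensions

generator = nn.Sequential(
    nn.Linear(latent_dim, 64), nn.ReLU(),
    nn.Linear(64, data_dim),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, data_dim) + 3.0     # stand-in "real" data
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # Discriminator learns to label real data 1 and generated data 0.
    d_loss = bce(discriminator(real), torch.ones(32, 1)) \
           + bce(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator learns to make the discriminator call its output real:
    # the machine's own "critic" replaces a human judge.
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()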

Throughout history, when an artist arises who breaks boundaries, the resulting art can’t be classified within established styles and sparks a brand-new school, as Picasso did with cubism. Computer art too does not fit within any of the traditional styles. It pushes forward the frontiers of art. “Computers are changing the way human artists paint,” says Alberto Barqué-Duran, an artist and performer who uses artificial neural networks in his work. And indeed, in 2019 a new category was added to the prestigious Prix Ars Electronica, directed by Gerfried Stocker: Artificial Intelligence and Life Art.

In music, Project Magenta created the first melody composed by a computer that had not been programmed in any way to do so. Artificial neural networks, such as Magenta’s NSynth, explore new sonic vistas, producing sounds never heard before. There is a huge difference between music created by machines under their own steam (end-to-end) and music created by computers that have been programmed to do so (rule-based). At the moment, the rule-based approach produces melodic music of complex structure and is more efficient at helping musicians play and compose music. For now, it produces music akin to the music it has learned. But the very complexity of the music it creates could point the way toward new approaches to composing, perhaps even toward music that human beings could never dream up. Indeed, the ultimate goal of all those working on machine-generated art, literature, and music is to create work beyond any known human genre or human imagining.
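As a rough illustration of what “rule based” means in practice, here is a toy melody generator whose explicit constraints (stay in C major, favor small steps, resolve to the tonic) are illustrative assumptions, not Magenta’s actual rules; an end-to-end system would instead learn such tendencies from recordings.

# Toy rule-based melody generator (illustrative sketch only):
# explicit music-theory constraints yield a plausible but derivative tune.
import random

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]   # MIDI note numbers, C4..C5

def next_note(current_idx):
    # Rule: move at most two scale degrees, weighted toward small steps.
    steps = [-2, -1, -1, 0, 1, 1, 2]
    idx = current_idx + random.choice(steps)
    return min(max(idx, 0), len(C_MAJOR) - 1)

def compose(length=16):
    idx = 0                      # rule: start on the tonic
    melody = [C_MAJOR[idx]]
    for _ in range(length - 2):
        idx = next_note(idx)
        melody.append(C_MAJOR[idx])
    melody.append(C_MAJOR[0])    # rule: resolve back to the tonic
    return melody

print(compose())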

Computer-generated literature is even more of a frontier. The question of whether and how machines can have and express emotion throws the problems into stark relief. Most difficult of all is the complex human facility of humor. Even at the most basic level, like knock-knock jokes, machines don’t know they are making a joke. They don’t have awareness, though this does not detract from the fact that they sometimes do something charming and unexpected which—to human eyes—seems to hint at a personality, such as when the AI that wrote the script for the film Sunspring suddenly said, “My name is Benjamin.”

For now, machines that are programmed tend to generate more sophisticated plots and stories than those created by more autonomous artificial neural networks. Tony Veale’s Scéalextric algorithm produces quite sophisticated stories thanks to its depth and the number of words at its disposal. It even at one point made a leap of imagination, conflating the characters Frank Underwood and Keyser Söze, both of whom were played by the same actor, Kevin Spacey. It jumped its own system.

Most programmed, rule-based systems have constraints to prevent them from producing nonsense, but artificial neural networks generate poetry and prose that frequently passes over into that realm, such as the script for Sunspring and the image-inspired poetry of word.camera, built by Sunspring’s creator, Ross Goodwin.

Poets like Nick Montfort and Allison Parrish use algorithms to tread the fine line between sense and nonsense in their explorations of semantic space, the space of meaning. Parrish looks into the question of what nonsense actually is. Is a word nonsensical simply because we’ve never heard of it? With the help of computers, they are able to expand our horizons, our sense of what is and is not acceptable and interesting.
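A hedged sketch of what exploring “semantic space” can look like in code: wander from a starting word in small random steps and report the nearest known word at each stop. The handful of made-up vectors below is a placeholder for the large pretrained word embeddings that poets like Parrish actually work with.

# Random walk through a toy "semantic space" (illustrative only):
# a few invented word vectors stand in for a real pretrained embedding.
import numpy as np

rng = np.random.default_rng(0)
words = ["ocean", "river", "cloud", "stone", "ember", "lantern", "moss", "salt"]
vectors = {w: rng.normal(size=8) for w in words}   # placeholder embeddings

def nearest(point):
    # Cosine similarity against every known word; return the closest.
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(words, key=lambda w: cos(vectors[w], point))

position = vectors["ocean"].copy()
for step in range(6):
    position += rng.normal(scale=0.4, size=8)      # drift a little in meaning-space
    print(step, nearest(position))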

How we interpret such gnomic prose can provide hints as to how we will respond to computer-generated prose of the future, prose written by an alien life-form. In the future, we can expect computers to produce literature different from anything we could possibly conceive of. Our instinct is to try to make sense of it if we can. But when a new form of writing appears, generated by sophisticated machines, we may not be able to. As we learn to appreciate it, perhaps we will even come to prefer machine-generated literature.

For the moment, the great stumbling block is that computers cannot appreciate the art and music they themselves produce and are unaware of the quality of the moves they make in chess and Go. Basically, they lack awareness.

Creativity in Humans and Machines

How are we going to recognize computer creativity? Most people would argue that the only way we know is by comparison with our own. We can only program our computers according to how we think and how our own creativity works. When machines reach our level of creativity, they will be able to develop creativity of their own—creativity that at present we are not equipped to imagine.

Douglas Eck, head of Project Magenta at Google, argues that it is a mistake to divide the world into human and AI, to think that we need to understand human creativity before we can understand machine creativity. To do so, he contends, is akin to the approach taken by composer and music theorist Fred Lerdahl and linguist Ray Jackendoff when they proposed a generative theory of music along the lines of Noam Chomsky’s universal generative grammar, a collection of empty forms that accumulate content through hearing speech.

Lerdahl and Jackendoff’s proposal in 1983 was for a set of structures for the way musical notes are grouped, possible transitions between them, their metrical structure and time span, and so forth. They claimed that these structures formed a sort of musical grammar in the unconscious, which we then apply to illuminate the structure of particular pieces, adding that uncovering and understanding these unconscious structures was a prerequisite for listening to and playing music.

Eck entirely disagrees with this approach. He argues that to try to “understand music first structurally so that then we can understand musical timing and performance is wrong because they are so intertwined.” Similarly, on the question of whether we need to understand human creativity before we can even start to examine machine creativity, he says, “Creativity has always been embedded in culture and so has technology. To force this factorization—first understanding human creativity independent from the rest of the world so that then we can understand how it all mixes—is completely missing the point.” For Eck, “technology is providing us with AI and AI has created things that are beautiful, and so we start to care differently about its creations. We should not say, hold on, let’s try to understand human creativity before machine creativity.”

Margaret Boden, research professor of cognitive science at the University of Sussex, was one of the first to suggest that computer programs could be related to the way the human mind works. To recap, she suggests three criteria to assess if an idea or an artifact is the product of creativity: that it should be novel, valuable, and surprising. Project Magenta’s ninety-second melody could be said to show creativity according to these criteria.

But is this all there is to creativity, both for us and for computers? Boden’s criteria focus on product rather than process. Particularly in the case of computers, the process of creativity is of great importance. The question is, what goes on in the computer’s brain? What goes on in the hidden layers, the seat of the machine’s reasoning power? We can see the results that emerge from them, but we have yet to understand how they work. The mystery of the hidden layers and what goes on there was the catalyst that inspired Alexander Mordvintsev to invent DeepDream, which was itself a step forward in understanding them.
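For readers curious what probing the hidden layers involves, here is a minimal sketch of the activation-maximization idea behind DeepDream: adjust an input image so that a chosen hidden layer responds more strongly, making visible what that layer has learned to detect. The choice of network (a pretrained VGG16 from torchvision) and of layer index are assumptions for illustration, not Mordvintsev’s exact recipe.

# Activation-maximization sketch (the idea behind DeepDream; assumptions:
# a pretrained torchvision VGG16 and an arbitrary mid-network layer).
import torch
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
for p in model.parameters():
    p.requires_grad_(False)       # we optimize the image, not the network
layer_index = 20                  # assumed: some mid-level layer

image = torch.rand(1, 3, 224, 224, requires_grad=True)   # start from noise
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(100):
    x = image
    for i, layer in enumerate(model):
        x = layer(x)
        if i == layer_index:
            break
    loss = -x.norm()              # ascend: make the chosen layer's activity large
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()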

Arthur I. Miller is Emeritus Professor of History and Philosophy of Science at University College London. He is the author of “Colliding Worlds: How Cutting-Edge Science is Redefining Contemporary Art” and other books, including “Einstein, Picasso: Space, Time, and the Beauty That Causes Havoc.”
