Computers can’t think; they do not reason on their own. Your mind is not a computer and your computer is not a mind. Engineers of ubiquitous computing platforms are determined to convince us otherwise. For many of them, artificial general intelligence—the point at which computers will exceed the intellectual capacity of humans—is just around the corner. A cadre of technophilic artists follows closely on their heels. But their claims have been greatly oversold. Few of these brave Futurists pause to ponder the deeper problem involved. Our minds crave narrative. Stories are how we make sense of an otherwise blank reality. If we are to live alongside artificial intelligence (AI), how might that bear on the narratives we use to make meaning of our world?
The American artist Ian Cheng knows computers can’t think. For several years, he has drawn on his study of cognitive science and his work with the special effects company Industrial Light and Magic to make work about human immersion in technology. His trilogy Emissaries (2017)—an open-ended, animated simulation with no pre-determined ending—is about the evolution of cognition. (The work is on show now at MoMA PS1 in New York.)
In each stage, animated characters build their own fictitious world with the aid of a network of AI algorithms. Simply put, it is a video game playing itself. The on-screen actions of the characters may seem unpredictable, but they are not random. They grow from highly-patterned learned outputs of the same tools that categorise images, translate texts or recommend Amazon products. The result is an epic creation myth in which an artificial “mind” evolves in an attempt to arrive at sentience.
In part one, Emissary In The Squat of the Gods, we see an ancient volcano nurturing a small community on the cusp of civilization. The full story is detailed in wall texts; onscreen the simulation is chaos: explosions in the distance, strange voices blurting out commands. A shaman and a snake-boy gather around a totem known as the Holy Fumerole. Other characters shift about. A young girl is hit in the head by volcanic debris, which shakes her from the spell of the voices that bind the community. With the help of an owl, she breaks away.
The next episode, Emissary Forks at Perfection (presented in another gallery), picks up the simulation "many lifetimes later." The setting is a crater lake formed by the volcanic eruption in the first episode. Here, AI surveys the last vestiges of human life amidst a landscape populated by Shiba Inu dogs.
In the last section, The Emissary Sunsets the Self, we find that the crater lake has given way to a “sentient” atoll. This is the final attempt of the AI to learn by “droning,” whereby it experiences the sensations and habits of a biological organism. When I was there on just one day of an endless simulation, it looked something like a Middle Eastern desert. An AI Puddle emissary (basically a worm) was spinning incessantly into the side of a dune.
But the narrative details of any one episode are not essential, because the fantastic plot of the simulation is impossible to follow. Its sophistication exceeds the limits of human perception. It is dizzying, logically and aesthetically. Within the first few minutes, any viewer comes to terms with the fundamental contradiction at play: the characters’ stated goals are purposefully interrupted by the machine’s learning. Every moment of the work is a reminder of the fundamental incompatibility of human cognition with a machine’s attempt to artificially replicate it. Emissaries, in short, stages a large-scale conflict between its narrative elements and the computer that diverts them. There is never any resolution. In fact, as you read this, the plot is still unfolding somewhere on the internet.
Ian Cheng, Emissary in the Squat of Gods (live simulation and story, infinite duration, 2015)
Ian Cheng, Emissary Forks at Perfection (live simulation and story, infinite duration, 2015-2016)
Cheng is adept at using industry tools to create a compelling cinematic experience. The production is professional, like a good video game; your senses are stimulated—and this is precisely the concern. Cheng’s immediate goals may be aesthetic, but the ideology that drives the production—in which a machine-driven civilization develops consciousness from primordial soup—makes claims far beyond mere entertainment. It provides a testing ground for the larger idea that human life and its social order have been superseded by machine intelligence.
Cheng makes another disturbing statement with Emissaries. Simulation, he says, is best applied when a system has too many possible dimensions for the human mind to create a narrative. Fair enough. But he goes further: “A simulation has no moral, prejudice, or meaning. Like nature, it just is.” Yet we know that all machine learning involves thousands of human decisions. Even unsupervised neural networks (which are patterned on the brain) have a history of development and implementation that bears the marks of human institutions. To say there are no morals in the field of AI is a dangerous calculation. Emissaries itself already contradicts the claim that AI is an emergent property born of natural laws. The structure Cheng imposes on his simulation is proof that complex systems can never be truly autonomous. Algorithms are human-made.
Emissaries illustrates the central folly of the computational age: that no amount of mathematical modeling will ever explain or reproduce consciousness. We can know what activity the brain appears to trigger, and we can even closely approximate behaviour through computation. But every simulation lacks the spontaneity that comes with human creativity, since by definition the simulation must rely on central and standardised inputs—and there is no such thing as a standardised mind. We will never reproduce the distributed subjectivity of human consciousness. Machine learning algorithms can model complex logic, but they can never explain the human condition quite the way art does.
All cultures use creation myths. They are powerful literary tropes and political tools that structure how we see our societies and ourselves. Cheng’s trilogy serves that purpose: it's a creation myth for the belief that AI might also have unconscious desires not unlike our own. What separates Emissaries from just another elaborate video game is that Cheng’s installation, when staged at MoMA PS1, claims a close correspondence between computational intelligence and cultural narrative. The central rhetorical device of the show is that AI assumes both the tradition of the epic and the exhibition format of the museum.
The purpose of AI isn’t to better understand human cognition; it's meant to resemble and replace cognition for our "post-human" economy. It follows that Cheng’s AI trilogy doesn’t tell us much about the “why” of human consciousness. Instead, Emissaries tempts us to believe that humanity, in all its epochal development, evolved through a data process; of course, it's much more complicated than that. Paradoxically, Cheng lays out the stakes for art in the age of computation: despite the hype around AI, it will always only stare us blankly in the face. AI has not yet approached sentience. And even when it does, we will still need our narratives. It is much easier to model human cognition than to explain it.
• Mike Pepi is a writer living in New York
• Ian Cheng, MoMA PS1, New York, until 25 September