TL;DR
- MoMA’s newest digital art exhibit is an AI trained on 180,000 works of art ranging from Warhol to Pac-Man.
- Created by pioneering artist Refik Anadol, the installation in the museum’s Gund Lobby uses a sophisticated machine-learning model to interpret the publicly available visual and informational data of MoMA’s collection.
- The installation has prompted speculation on the nature of art and what it tells us about a machine’s “dream-state.”
READ MORE: Modern Dream: How Refik Anadol Is Using Machine Learning and NFTs to Interpret MoMA’s Collection (MoMA)
MoMA is exhibiting a new digital artwork that uses artificial intelligence to generate new images in real time.
The project, by artist Refik Anadol and titled Refik Anadol: Unsupervised, uses 380,000 images of more than 180,000 artworks from MoMA’s collection to create a continuous stream of moving images.
“It breathes,” Fast Company’s Jesus Diaz marvels, “like an interdimensional being… this constant self-tuning makes the exhibit even more like a real being, a wonderful monster that reacts to its environment by constantly shapeshifting into new art.”
To be fair, Diaz was being shown around by the artist himself, who says he wanted to explore how profoundly AI could change art. In an interview for the MoMA website, alongside Michelle Kuo, Paola Antonelli and Casey Reas, Anadol shares, “I wanted to explore several interrelated questions: Can a machine learn? Can it dream? Can it hallucinate?”
To which the answer is surely no. But if nothing else, Unsupervised has succeeded as art should in feeding the imagination.
The display is “a singular and unprecedented meditation on technology, creativity, and modern art” which is focused on “reimagining the trajectory of modern art, paying homage to its history, and dreaming about its future,” MoMA states in a press release.
READ MORE: Refik Anadol: Unsupervised (MoMA)
The work is described by Anadol as a “Machine Hallucination” that brings a “self-regenerating element of surprise to the audience and offers a new form of sensorial autonomy via cybernetic serendipity.”
READ MORE: Unsupervised — Machine Hallucinations — MoMA (refikanadol.com)
To understand what Unsupervised means, you have to understand the two main methods by which current AIs learn. Supervised models, such as OpenAI’s DALL-E, are trained on data tagged with keywords; those keywords let the model organize clusters of similar images and, when prompted, generate new images based on what it has learned. Unsupervised models, by contrast, are given no labels at all and have to find structure in the data on their own.
In this case, the AI was left to make sense of the entire MoMA art collection on its own, without labels. Over the course of six months, the software created by Anadol and his team was fed 380,000 high-resolution images taken from more than 180,000 artworks stored in MoMA’s galleries, including pieces by Pablo Picasso, Andy Warhol and Gertrudes Altschul.
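To make the distinction concrete, here is a minimal, purely illustrative sketch in Python. It is not Anadol’s actual pipeline (which is not public); the embeddings are fabricated stand-ins for features extracted from digitized artworks. The supervised route cannot learn without a label for every example, while the unsupervised route, shown here with simple k-means clustering, finds groupings in unlabeled feature vectors on its own.

```python
# Illustrative contrast only: supervised learning needs labels,
# unsupervised learning discovers structure from the data alone.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for image embeddings: in a real pipeline these would come from an
# encoder run over digitized artworks; here we fabricate 999 vectors that fall
# into three loose groups.
embeddings = np.vstack([
    rng.normal(loc=center, scale=0.5, size=(333, 64))
    for center in (-2.0, 0.0, 2.0)
])

# Supervised route (conceptually how a labeled dataset is used):
# every example needs a tag before the model can learn from it.
labels = np.repeat([0, 1, 2], 333)
classifier = LogisticRegression(max_iter=1000).fit(embeddings, labels)

# Unsupervised route (the idea behind "Unsupervised"): no labels at all;
# the algorithm groups similar images purely from their features.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embeddings)

print("supervised accuracy on its own labels:", classifier.score(embeddings, labels))
print("unsupervised cluster sizes:", np.bincount(clusters))
```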
The team created and tested various AI models to see which one produced the best results, then picked one and trained it for three weeks.
Crafting the neural network and building the training model to create Unsupervised is only half of the story.
To generate each image in real time, the computer constantly weighs two inputs from its environment. Diaz explains that it tracks the motion of visitors, captured by a camera mounted in the lobby’s ceiling, and pulls in Manhattan weather data, obtained from a weather station in a nearby building.
“Like a joystick in a video game, these inputs push forces that affect different software levers, which in turn change how Unsupervised creates the images,” Diaz writes.
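As a rough, hypothetical illustration of that “joystick” idea (the installation’s actual software is not public, and every name and scale factor below is invented for the sketch), two environmental readings can be mapped onto small pushes along fixed directions in a generator’s latent space:

```python
# Hypothetical sketch: environmental signals nudge the latent code that a
# generative model would decode into the next frame. All names and scale
# factors are invented; this is not the installation's real code.
import numpy as np

rng = np.random.default_rng(42)
LATENT_DIM = 512  # a typical latent size for StyleGAN-class generators

# Fixed random directions along which each environmental input pushes the code.
MOTION_DIR = rng.normal(size=LATENT_DIM)
WEATHER_DIR = rng.normal(size=LATENT_DIM)

def nudge_latent(latent, motion_level, temperature_c):
    """Perturb a latent code using two normalized environmental readings."""
    motion_force = np.clip(motion_level, 0.0, 1.0) * 0.05            # ceiling-camera activity, 0..1
    weather_force = np.clip(temperature_c / 40.0, -1.0, 1.0) * 0.02  # crude temperature normalization
    return latent + motion_force * MOTION_DIR + weather_force * WEATHER_DIR

# One step of the real-time loop: read the sensors, nudge the latent, then (in
# a real system) hand the new code to the generator to render the next frame.
base = rng.normal(size=LATENT_DIM)
nudged = nudge_latent(base, motion_level=0.7, temperature_c=12.0)
print("latent shifted by", np.linalg.norm(nudged - base))
```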
The results probably need to be experienced before judgement can be passed.
“AI-generated art has arrived,” says Brian Caulfield, blogging for NVIDIA, whose StyleGAN forms the basis for Anadol’s AI.
READ MORE: MoMA Installation Marks Breakthrough for AI Art (NVIDIA)
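The article names NVIDIA’s StyleGAN family as the technical basis, and one common way such a continuously morphing stream of images is produced is by walking smoothly through a trained generator’s latent space. The sketch below illustrates only that general idea; the `fake_generator` stub is a stand-in for a real StyleGAN, which would normally be loaded from published weights.

```python
# Sketch of a latent-space walk: interpolate between random latent codes and
# decode each intermediate point into a frame. The generator is a placeholder.
import numpy as np

rng = np.random.default_rng(7)
LATENT_DIM = 512

def fake_generator(z):
    """Placeholder for a trained generator: maps a latent code to a tiny 8x8 'frame'."""
    # A deterministic nonlinear mapping, so nearby latents give similar frames.
    return np.tanh(np.outer(z[:8], z[8:16]))

def latent_walk(steps_per_segment=30, segments=4):
    """Yield frames by interpolating between successive random latent codes."""
    z_from = rng.normal(size=LATENT_DIM)
    for _ in range(segments):
        z_to = rng.normal(size=LATENT_DIM)
        for t in np.linspace(0.0, 1.0, steps_per_segment):
            yield fake_generator((1.0 - t) * z_from + t * z_to)
        z_from = z_to

frames = list(latent_walk())
print(len(frames), "frames of shape", frames[0].shape)
```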
“Refik is bending data — which we normally associate with rational systems — into a realm of surrealism and irrationality,” Michelle Kuo, the exhibit’s curator, explains to Zachary Small at The New York Times. “His interpretation of MoMA’s dataset is essentially a transformation of the history of modern art.”
READ MORE: Even as NFTs Plummet, Digital Artists Find Museums Are Calling (The New York Times)
In his interview for MoMA, Anadol even has the chutzpah to compare his work to breakthroughs in photography.
“Thinking about when William Henry Fox Talbot invented the calotype, and when he was playing with the early salt prints, pigmentation of light as a material — working with AI and its parameters has very similar connotations: the question of when to stop the real, or when to start the unreal.”
For example, Unsupervised is able to draw on the vast array of digital color information from the artworks on which it was trained and, from that, generate colors of its own.
Anadol imagines looking at historic paintings like Claude Monet’s Water Lilies and remembering their richness and personality. Now imagine a dataset based on those works, one that captures every detail your mind cannot possibly hold.
“Because we know that that EXIF [exchangeable image file format] data that takes the photographic memory of that painting is in the best condition we could ask for,” Anadol comments. “I think that pretty much the entire gamut of color space of Adobe RGB most likely, exists in MoMA’s archive. So we are seeing the entire spectrum of real color but also the machine’s interpretation of that color, generating new colors from and through the archive.”
Speaking to Diaz at Fast Company, David Luebke, vice president of graphics research at NVIDIA, says simply, “Unsupervised uses data as pigment to create new art.”
READ MORE: MoMA’s newest artist is an AI trained on 180,000 works, from Warhol to Pac-Man (Fast Company)
Digital artist and collaborator Casey Reas offers another way to think about the AI, one that doesn’t cast it as somehow conscious.
“What I find really interesting about the project is that it speculates about possible images that could have been made, but that were never made before,” Reas says. “And when I think about these GANs, I don’t think about them as intelligent in the way that something has consciousness; I think of them the way that the body or even an organ like the liver is intelligent. They’re processing information and permuting it and moving it into some other state of reality.”
Anadol and the exhibit curators would have us think that the art world is in a new “renaissance,” and that Unsupervised represents its apex.
“Having AI in the medium is completely and profoundly changing the profession,” the artist noted. It’s not just an exploration of the world’s foremost collection of modern art, “but a look inside the mind of AI, allowing us to see results of the algorithm processing data from the collection, as well as ambient sound, temperature and light, and ‘dreaming.’ ”
Of course, this is only the tip of the iceberg, and much more is coming. Modern generative AI models have shown they can generalize beyond particular subjects, such as images of human faces, cats or cars. They can also incorporate language models that let users specify the image they want in natural language, or through other intuitive means, such as inpainting.
“This is exciting because it democratizes content creation,” Luebke said. “Ultimately, generative AI has the potential to unlock the creativity of everybody from professional artists, like Refik, to hobbyists and casual artists, to school kids.”
AI ART — I DON’T KNOW WHAT IT IS BUT I KNOW WHEN I LIKE IT:
Even with AI-powered text-to-image tools like DALL-E 2, Midjourney and Craiyon still in their relative infancy, artificial intelligence and machine learning are already transforming the definition of art, including cinema, in ways no one could have predicted. Gain insights into AI’s potential impact on Media & Entertainment in NAB Amplify’s ongoing series of articles examining the latest trends and developments in AI art:
- What Will DALL-E Mean for the Future of Creativity?
- Recognizing Ourselves in AI-Generated Art
- Are AI Art Models for Creativity or Commerce?
- In an AI-Generated World, How Do We Determine the Value of Art?
- Watch This: “The Crow” Beautifully Employs Text-to-Video Generation