Chapter 16: The Image Generators

In 2022, machines began making images that rivaled human artists—trained on those same artists' work, often without their knowledge, never with their consent. The diffusion revolution was both technical breakthrough and cultural rupture, and the rupture proved harder to repair than anyone anticipated.

The technology emerged from a mathematical insight: instead of generating images directly, you could train a neural network to remove noise. Start with pure static, then iteratively refine it, guided by text prompts, until recognizable images emerged from chaos. The process was called diffusion, and it produced results that previous approaches could not match. Photorealistic faces. Fantastical landscapes. Images in the style of specific artists, conjured in seconds.
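The loop described above, beginning with static and repeatedly removing predicted noise, can be sketched in a few lines. This is a purely illustrative toy, not any production model's code: the `denoise_step` function stands in for the trained neural network, and the prompt-conditioned `target` is a hypothetical placeholder for what guidance would steer toward.

```python
import numpy as np

def denoise_step(x, t, target):
    # Stand-in for a trained noise-prediction network: it "predicts"
    # the noise as the gap between the current sample and a target.
    predicted_noise = x - target
    return x - (1.0 / t) * predicted_noise

def generate(target, steps=50, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(target.shape)  # start from pure static
    for t in range(steps, 0, -1):          # iteratively refine
        x = denoise_step(x, t, target)
    return x

target = np.zeros((8, 8))                  # toy 8x8 "image"
result = generate(target)
print(float(np.abs(result - target).max()))  # → 0.0
```

In a real diffusion model the denoiser is a large network trained on millions of images, and the guidance comes from a text encoder rather than a fixed target; the structure of the loop, however, is the same: many small steps from noise toward an image.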

DALL-E, developed by OpenAI, demonstrated the possibilities. Midjourney, a small company run by a former NASA engineer, built a community around the technology on Discord. Stable Diffusion, released by Stability AI in August 2022, made the capability open: anyone could download the model, run it on their own hardware, generate whatever they imagined.

The images that emerged were beautiful, and they were built on theft.



Karla Ortiz had spent years developing her craft. A concept artist based in San Francisco, she had worked with Marvel on Guardians of the Galaxy, Loki, The Eternals, Black Panther, Avengers: Infinity War, Doctor Strange. She had contributed to Final Fantasy 16, to Magic: The Gathering cards that collectors treasured. Her distinctive style, the product of countless hours of study and practice, was recognizable to anyone who followed her work.

In late 2022, she discovered that style had been absorbed. Users were generating images "in the style of Karla Ortiz" using prompts that invoked her name. The AI had learned from her work: scraped from the internet, ingested into training datasets, transformed into mathematical weights that could reproduce her aesthetic without her participation.

She was not alone. Sarah Andersen, a cartoonist whose gentle illustrations had been stripped of their context. Kelly McKernan, whose ethereal paintings now existed as statistical patterns in a machine. Thousands of artists, discovering that their life's work had become raw material for systems that would compete with them.

The artists demanded three things: credit, consent, and compensation. They had received none.

In January 2023, Ortiz, Andersen, and McKernan filed a class action lawsuit against Stability AI, Midjourney, DeviantArt, and Runway AI. The complaint alleged that the companies "stole their original work to train their AI models and generate remarkably similar and competing art." The case would take years to resolve, but it established a principle: the artists would fight back.


Ortiz became a leading voice in the resistance.

In July 2023, she testified before the United States Senate, describing what had happened and what it meant. "I worked with Disney for eight years or so," she explained. "Disney owns so much of my work and there's nothing stopping them from generating a Karla Ortiz model internally and never having to hire me again. That's a possibility that can happen and we need worker protections against that."

Her position was nuanced. She did not oppose AI technology itself. "AI needs to be fair, and ethical for everybody—and not only for the companies that make AI products," she argued. "AI needs to be fair to the customers who use these products, and also for creative people like me who make the raw material that these AI materials depend upon."

The Concept Art Association, an advocacy organization for artists in entertainment, began organizing. They raised funds to hire a full-time lobbyist in Washington. They built coalitions with other creative unions. They discovered that their community—often solitary, often competitive—could act collectively when survival demanded it.

In January 2025, San Diego Comic-Con announced it would ban AI-generated art from its 2026 art show. The decision came after swift protests from artists who refused to exhibit alongside machines trained on their work. Ortiz celebrated: "We did this together!"

The victories were small, but they established that resistance was possible. The platforms moved fast; the artists organized faster.


The legal reckoning gathered momentum.

Getty Images, the massive stock photography company, filed suit against Stability AI in February 2023. The complaint alleged infringement of more than 12 million photographs, along with their captions and metadata. Getty's images had been scraped without license, used to train a system that would compete directly with Getty's business. Generated images sometimes even reproduced Getty's watermark—the machine having learned, absurdly, to imitate the mark that was supposed to prevent unauthorized use.

The case proceeded slowly. By late 2024, Stability AI was reportedly refusing to participate in discovery, a sign, perhaps, of a company struggling to survive its legal exposure. A parallel case in London promised to "shape the future relationship between AI and creative work."

In August 2024, U.S. District Judge William Orrick issued a ruling that gave the artists hope. He found plausible the artists' theory that "image-diffusion models like Stable Diffusion contain compressed copies of their datasets," and plausible that "training, distributing, and copying such models constitute acts of copyright infringement." The case would proceed: claims for direct copyright infringement, trademark, trade dress, and inducement were allowed to go forward.

The ruling did not resolve the central legal question: Is AI training copying? When a neural network learns from an image, does it make a copy in any legally meaningful sense? Or does it transform the image into something fundamentally different: mathematical weights, statistical patterns, abstract representations that no longer constitute the original work?

The question remained open. But the courts were taking it seriously.

Then, in mid-2025, the largest players entered the arena. Disney and NBCUniversal filed suit against Midjourney, alleging that the platform had used trademarked characters—Elsa, Minions, Darth Vader, Homer Simpson—to train its models. At the time of this writing, the case was newly filed; its outcome remained uncertain. But the companies whose intellectual property defined global entertainment were now in the fight. Whatever ambiguity had protected the AI platforms was about to be tested against the most aggressive copyright enforcers in history.


The platforms, meanwhile, faced their own reckonings.

Stability AI, which had led the open-source charge, struggled financially; multiple rounds of layoffs followed. CEO Emad Mostaque resigned in April 2024. The company had raised hundreds of millions in venture capital, achieved valuations in the billions, and now found itself mired in litigation with uncertain prospects.

Midjourney had built differently: small team, subscription revenue, no outside funding for years. Its estimated annual revenue exceeded $200 million. But the Disney lawsuit threatened everything. A judgment against the company could be existential.

The economics of image generation were built on a contradiction. The raw material (millions of copyrighted images scraped from the internet) had been acquired for free. The revenue came from selling access to what those images enabled. If the courts ruled that the training itself was infringement, the entire business model collapsed.


The global responses varied.

The European Union, through its AI Act, mandated transparency. General-purpose AI models would face requirements to document their training data, verify copyright compliance, and clearly label AI-generated content. The implementation would phase in through 2027, but the direction was clear: Europe would require accountability that American law did not yet demand.

China took a different approach. AI-generated content would require mandatory labeling, effective September 2025. The regulatory framework focused less on copyright (Chinese law operated differently) and more on social control, ensuring that generated content could be identified and, if necessary, censored.

Japan presented a paradox. Its copyright law was historically permissive for AI training, and the government had initially signaled that such training did not infringe. But creative industry pushback was intense. Manga and anime, cultural exports that defined Japan's global brand, faced the same threats as Western art. Artists organized. Policy reconsidered. The permissive stance was being walked back.

In South Korea, the K-pop industry confronted AI-generated content that threatened its carefully constructed images. Webtoon artists organized against the platforms. In India, a large creative workforce (designers, illustrators, concept artists) watched developments in American courts, knowing the outcomes would shape their livelihoods. In Latin America, artists organized in Mexico and Argentina, aware of the colonial dynamics at play: AI trained primarily on Western data, deployed globally, with profits flowing to Silicon Valley.

The AI companies operated across borders. The regulations did not. The asymmetry created spaces where harm could accumulate before response could catch up.


The story of the image generators was not finished. As of this writing, the major lawsuits remained unresolved. The platforms continued operating. Artists continued creating—and continued fighting.

What had happened was a kind of enclosure. The commons of human creativity (images shared online, posted on social media, displayed in portfolios) had been harvested to train machines that would compete with the creators themselves. The harvesting was legal in the sense that no law explicitly prohibited it. It was illegal, the artists argued, because the uses that followed violated copyrights that already existed.

The courts would decide. The outcome would shape not just AI, but the meaning of creative work in an age when machines could approximate human expression with unsettling precision.

Karla Ortiz and the artists who organized with her had asked for three things: credit, consent, and compensation. They had not received them. But they had established that the fight would continue, in courts, in legislatures, in the collective action of workers who refused to accept that their labor could be taken without consequence.

The diffusion models could generate images in seconds. The reckoning would take years. The artists, who had learned patience through the long work of developing their craft, were prepared to wait.