On March 2, 2026, the U.S. Supreme Court declined to hear Thaler v. Perlmutter, effectively cementing the legal principle that artificial intelligence cannot be recognized as an author under U.S. copyright law. The decision, while widely anticipated, closes the door on one of the most watched intellectual property cases of the decade. But if anyone thinks the legal questions around AI-generated art are settled, they are not paying attention. Multiple high-stakes lawsuits — including Disney v. Midjourney and the landmark Andersen v. Stability AI class action — are barreling toward trial dates that could reshape the legal and commercial foundations of every major image generation platform.

The Thaler Decision: What the Court Actually Said

The case began when Dr. Stephen Thaler sought to register a copyright for an artwork titled "A Recent Entrance to Paradise," listing his AI system — the "Creativity Machine" — as the sole author. The U.S. Copyright Office rejected the application, and Thaler challenged that rejection through the federal courts. The D.C. Circuit Court upheld the Copyright Office's decision, with Circuit Judge Patricia A. Millett writing that "the Creativity Machine cannot be the recognized author of a copyrighted work because the Copyright Act of 1976 requires all eligible work to be authored in the first instance by a human being."

The Supreme Court's denial of certiorari means this ruling stands as binding precedent in the D.C. Circuit and strong persuasive authority across the country. Works created solely by AI, with no human creative contribution, cannot receive copyright protection under current U.S. law. They enter the public domain the moment they are created, meaning anyone can use, modify, and redistribute them without permission.

The Critical Gray Area: AI-Assisted vs. AI-Generated

While the Thaler ruling is clear about purely AI-generated works, it deliberately leaves open the far more commercially significant question: what happens when a human uses AI as a tool within a broader creative process? The Copyright Office has consistently acknowledged that using AI as an instrument — much like a camera or Photoshop — does not automatically disqualify a work from protection. The critical factor is whether a human exercised "sufficient creative control" over the expressive elements of the final work.

This distinction creates an enormous gray area that the courts have not yet fully explored. A digital artist who generates hundreds of images, carefully curates a selection, composites elements from multiple outputs, and performs extensive manual retouching has a much stronger copyright claim than someone who types a single prompt and uses the raw output. But where exactly the line falls between these extremes remains legally undefined — and that uncertainty has real commercial consequences for every creator and business using AI image generation tools.

Andersen v. Stability AI: The Training Data Reckoning

If Thaler answered who can own AI art, Andersen v. Stability AI will answer whether AI companies had the right to create these tools in the first place. Filed as a class action by artists Sarah Andersen, Kelly McKernan, and Karla Ortiz, the case alleges that Stability AI, Midjourney, and DeviantArt scraped approximately 5 billion copyrighted images from the internet via the LAION dataset — without permission, without compensation, and without any attempt to obtain licenses — and used those images to train commercial image generation models that directly compete with the original artists.

The case is now set for trial on September 8, 2026, after years of procedural battles, discovery disputes, and motions to dismiss. The defendants have broadly argued that training an AI model on copyrighted works constitutes fair use — a transformative act analogous to a human artist studying existing works to develop skills and style. The plaintiffs counter that this is copying at an industrial scale, that the resulting models can produce works that compete directly with the originals in the marketplace, and that calling it "fair use" would eviscerate copyright protection for visual artists.

The outcome of this trial will have seismic implications. A ruling for the plaintiffs could require AI companies to license their training data — fundamentally changing the economics of model development. A ruling for the defendants could establish a broad precedent that AI training is fair use, effectively immunizing the current practice of scraping the internet for training data.

Disney v. Midjourney: When Hollywood Enters the Ring

If Andersen represents the voice of independent artists, Disney v. Midjourney brings the full weight of corporate intellectual property enforcement to the AI image generation debate. Filed in the Central District of California, the suit alleges that Midjourney unlawfully copied Disney's copyrighted characters, scenes, and visual styles to train its image generation service, and that the platform continues to produce derivative images of iconic characters — from Mickey Mouse to Marvel superheroes — without authorization.

What makes Disney's approach particularly interesting is its dual strategy. Even as it sues Midjourney, Disney signed a 2026 licensing deal with OpenAI, creating an authorized pathway for AI generation of Disney-related content. This suggests that the real fight is not over whether AI can generate images of copyrighted characters — Disney seems to accept that this is inevitable — but over who profits from it and under what terms. The message to the industry is clear: license our content, or face litigation.

The Discovery Battle: 108 Million ChatGPT Logs

In parallel with these image-focused cases, the consolidated copyright litigation against OpenAI in the Southern District of New York has produced its own dramatic developments. The court ordered the production of 108 million ChatGPT output logs as part of discovery — an unprecedented data trove that could reveal exactly how often AI models reproduce or closely paraphrase copyrighted material. While this case centers on text rather than images, its findings will inevitably influence how courts evaluate similar claims in the image generation context.

Courts across the country are reaching divergent conclusions on identical legal questions, particularly regarding fair use and market harm in AI training contexts. These conflicting rulings make it increasingly likely that the Supreme Court will eventually need to take up one of these training data cases — a prospect that could define intellectual property law for a generation.

What Creators Should Know Right Now

The legal uncertainty creates practical challenges for everyone in the AI image generation ecosystem. Here is what the current state of the law means for different groups.

  • AI art creators: Works with significant human creative input remain your best bet for copyright protection. Document your creative process — prompts, iterations, selections, and edits — to demonstrate human authorship if ever challenged. Raw, unmodified AI outputs remain legally unprotectable.
  • Traditional artists: The Andersen trial in September 2026 will be the most consequential legal event for artists concerned about training data scraping. Tools like Glaze and Nightshade offer some protection, and "Do Not Train" registries provide a mechanism for opting out, though enforcement remains voluntary.
  • Businesses using AI-generated images: The safest commercial approach is to use platforms with clear IP indemnification (like Adobe Firefly) or to ensure significant human creative involvement in any AI-assisted output used commercially. The legal risk of relying on purely AI-generated content for commercial purposes remains elevated.
  • AI companies: The dual strategy emerging from cases like Disney v. Midjourney suggests that proactive licensing programs may be more sustainable than relying on fair use defenses. Companies that build licensing infrastructure now may find themselves better positioned regardless of how the courts rule.
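The documentation advice for AI art creators above can be sketched as a simple provenance log. The record format below is purely illustrative — the field names and structure are assumptions for the sketch, not a legal or industry standard — but it captures the kinds of facts (prompts, iterations, human edits) that a human-authorship claim would likely rest on:

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(prompt, tool, human_edits, output_bytes=b""):
    """Build one machine-readable record of a step in an AI-assisted workflow.

    Hypothetical schema: logs the prompt, the tool used, a description of the
    human creative choices (curation, compositing, retouching), and a hash of
    the resulting file so the record can later be tied to a specific output.
    """
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "prompt": prompt,
        "human_edits": human_edits,  # the legally significant part
        "output_sha256": hashlib.sha256(output_bytes).hexdigest(),
    }

# Example: log one iteration of an AI-assisted piece.
record = provenance_record(
    prompt="watercolor fox, autumn forest",
    tool="example-image-model",  # hypothetical tool name
    human_edits="selected 1 of 40 outputs; composited sky from a second "
                "output; manual retouch of fur detail",
    output_bytes=b"<image bytes here>",
)
print(json.dumps(record, indent=2))
```

Kept as append-only JSON, such records are cheap to produce during normal work and give a creator contemporaneous evidence of "sufficient creative control" if a registration or claim is ever challenged.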

What Comes Next

The next twelve months will be among the most consequential in the history of AI and intellectual property law. The Andersen v. Stability AI trial in September 2026 will produce the first jury verdict on AI training data — a decision that will either validate or upend the legal foundation on which the entire industry has been built. The EU AI Act's Article 50 transparency requirements take effect in August 2026, requiring AI-generated content to be marked in machine-readable formats. And in the U.S., the DEFIANCE Act — passed unanimously by the Senate in January 2026 — targets non-consensual AI-generated deepfakes with federal penalties.
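In the simplest case, Article 50's machine-readable marking requirement could be met with a disclosure manifest accompanying each generated image. The sketch below is illustrative only — embedded provenance standards such as C2PA Content Credentials are the likelier real-world mechanism, and the field names here are assumptions, not the regulation's language:

```python
import hashlib
import json

def ai_content_manifest(image_bytes, generator_name):
    """Produce a minimal machine-readable disclosure for an AI-generated image.

    Illustrative sketch: binds a plain AI-generated flag to a specific file via
    its SHA-256 hash. Real compliance tooling would likely embed credentials in
    the file itself rather than emit a loose JSON sidecar like this.
    """
    return json.dumps({
        "ai_generated": True,            # assumed field names throughout
        "generator": generator_name,
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }, sort_keys=True)

manifest = ai_content_manifest(b"<png bytes>", "example-model-v1")
print(manifest)
```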

For creators, the message is clear: the technology has outpaced the law, but the law is catching up fast. The decisions made in courtrooms over the next year will determine not just who can profit from AI-generated art, but what obligations AI companies owe to the millions of artists whose work made that art possible in the first place.
