The legal landscape surrounding AI-generated art is shifting faster than the technology itself. In barely two years, the conversation has moved from speculative thought experiments about machine creativity to landmark court rulings, sweeping legislation, and heated multi-billion-dollar lawsuits that will shape intellectual property law for decades to come. For creators — whether you are an artist worried about your work being scraped into training datasets, a designer using AI tools in your commercial workflow, or a hobbyist sharing AI-generated images online — understanding the current state of copyright law as it applies to AI art is no longer optional. It is essential.
The Copyright Question
At the heart of every legal debate about AI-generated art sits a deceptively simple question: can the output of an artificial intelligence system be copyrighted? The answer, as of early 2026, remains frustratingly nuanced. In the United States, the Copyright Office has maintained a consistent position through a series of guidance documents and registration decisions: copyright protection requires human authorship. Works generated entirely by a machine, with no meaningful creative contribution from a human being, are not eligible for copyright registration and therefore fall into the public domain.
This principle was established most clearly in the Copyright Office's 2023 decision on the graphic novel "Zarya of the Dawn," in which the Office granted protection to the text and overall arrangement authored by Kris Kashtanova but denied copyright to the individual Midjourney-generated images within the book. The Office reasoned that because Kashtanova could not predict or control the specific visual output of each generation, the images lacked the requisite human authorship. This decision became the touchstone that subsequent registration decisions have largely followed.
However, the boundary between "AI-generated" and "AI-assisted" remains deliberately undefined. The Copyright Office has acknowledged that using AI as a tool within a broader creative process — much like using Photoshop, a camera, or any other instrument — does not automatically disqualify a work from protection. The critical factor is whether a human exercised sufficient creative control over the expressive elements of the final work. Significant post-processing, selective curation from hundreds of generations, detailed prompt engineering combined with manual editing, and the use of AI outputs as components within a larger human-directed composition may all support a claim of human authorship. The challenge is that no bright line exists to tell you exactly how much human involvement is enough.
Training Data and Fair Use
If the copyrightability of AI outputs is the philosophical question, the legality of training data is the commercial battleground. The core legal conflict centers on whether using copyrighted images, illustrations, photographs, and other creative works to train generative AI models constitutes fair use under U.S. copyright law — or whether it represents unauthorized reproduction on an unprecedented scale.
The most significant lawsuits in this area are the class-action cases filed against Stability AI, Midjourney, and DeviantArt by a group of visual artists led by Sarah Andersen, Kelly McKernan, and Karla Ortiz. Filed in early 2023, these cases allege that the defendants scraped billions of copyrighted images from the internet without permission or compensation, used those images to train commercial image generation models, and that the resulting models are capable of producing works that compete directly with the original artists in the marketplace. Getty Images filed a parallel suit against Stability AI, alleging that millions of its copyrighted photographs were used in training data without licensing.
The defendants have broadly argued that training an AI model on copyrighted works is transformative fair use — analogous to a human artist studying existing works to learn techniques and styles. They contend that the trained model does not contain copies of the original works, that the outputs are new creative expressions rather than reproductions, and that the process is no different in principle from how search engines index copyrighted web pages or how researchers analyze copyrighted texts for academic purposes.
As of early 2026, no definitive appellate ruling has resolved these arguments. Several district court decisions have allowed claims to proceed, rejecting motions to dismiss and signaling that courts take the plaintiffs' arguments seriously. The outcomes of these cases will likely establish the legal framework for AI training for years to come, and their implications extend far beyond visual art into music, writing, code, and every other domain where generative AI operates.
Key Legal Developments in 2025-2026
The regulatory landscape has evolved significantly in the past year. The European Union's AI Act, which entered phased implementation in 2025, includes specific transparency requirements for generative AI systems. Providers of general-purpose AI models must publish sufficiently detailed summaries of the training data used, comply with EU copyright law including the text and data mining opt-out provisions of the 2019 Copyright Directive, and clearly label AI-generated content in certain contexts. These requirements represent the most comprehensive regulatory framework for generative AI anywhere in the world.
In the United States, executive orders issued in late 2023 and updated in 2025 directed federal agencies to develop guidelines for AI-generated content in government procurement and communications, but stopped short of new legislation addressing copyright directly. Several bills were introduced in Congress during 2025, including proposed amendments to the Copyright Act that would require disclosure of copyrighted works used in AI training and establish a compulsory licensing framework for training data. None had passed into law by early 2026, though bipartisan support for some form of training data transparency suggests legislation may eventually emerge.
Internationally, Japan maintained its notably permissive stance, with its copyright law generally allowing the use of copyrighted works for computational analysis including AI training, provided the use does not unreasonably prejudice the rights holder's interests. China introduced draft regulations requiring AI service providers to respect intellectual property rights in training data, though enforcement mechanisms remain unclear. The global patchwork of approaches creates significant uncertainty for AI companies operating across borders and for creators whose work circulates internationally.
Who Owns What?
Ownership of AI-generated art depends on a complex interplay of factors: the terms of service of the platform used, the degree of human creative input, the jurisdiction, and the specific circumstances of creation. Here is a general breakdown of the most common scenarios as the law currently stands.
- The user/creator: If you use an AI tool as part of a substantially human-directed creative process — providing detailed prompts, curating and selecting from many outputs, performing significant post-editing, or incorporating AI-generated elements into a larger original work — you have the strongest claim to copyright ownership of the final result. Most major platforms also assign usage rights to the user through their terms of service.
- The AI company: Some platforms retain certain rights over outputs generated through their systems. Midjourney's terms, for example, grant the company a broad license to use, reproduce, and display images generated by users. OpenAI's terms for DALL-E assign all rights in the output to the user, subject to content policy compliance. The specific terms vary significantly between platforms and are worth reading carefully.
- Public domain: Works generated entirely by AI with no meaningful human creative contribution are not copyrightable in the United States and most other jurisdictions. This means anyone can freely use, modify, and distribute them. If you generate an image by typing a simple prompt and use the raw output without modification, that image may have no copyright protection at all — which means you cannot prevent others from using it either.
Commercial Use Guidelines
Using AI-generated art commercially requires careful attention to both legal risks and platform-specific terms. As a practical matter, the biggest risk factors are not theoretical copyright questions but concrete issues like recognizable copyrighted characters appearing in outputs, identifiable real people being depicted without consent, and outputs that too closely resemble specific existing copyrighted works.
Midjourney grants commercial usage rights to paid subscribers, though free-tier users receive only a non-commercial license. Companies with more than $1 million in annual revenue are required to subscribe to the Pro or Mega plan. Midjourney retains a license to use all generated images, including for marketing and model improvement, though the paid subscriber retains ownership of their creations.
DALL-E through OpenAI's terms provides users with full rights to their generated outputs, including commercial usage, with no revenue thresholds or plan restrictions. OpenAI explicitly states that users own the images they create and can sell, print, or distribute them freely. However, outputs must comply with OpenAI's content policy, and OpenAI may use inputs and outputs from its consumer products for model improvement unless the user opts out through account settings; data sent through the API is not used for training by default.
Adobe Firefly takes a differentiated approach by training exclusively on licensed content — Adobe Stock images, openly licensed works, and content in the public domain. This "commercially safe" training pipeline is designed to minimize the legal risk associated with copyrighted training data. Adobe provides an intellectual property indemnification for enterprise customers using Firefly outputs, effectively promising to cover legal costs if a Firefly output is found to infringe on a third party's copyright. Comparable indemnities have since been announced by other major vendors, but the combination of licensed-only training and indemnification makes Firefly particularly attractive for risk-averse commercial applications.
Protecting Your Work as an Artist
For traditional artists and photographers concerned about their work being used to train AI models without permission, several protective mechanisms have emerged, though none is yet comprehensive.
- Glaze and Nightshade: Developed by researchers at the University of Chicago, Glaze applies subtle perturbations to images that disrupt style mimicry by AI models, making it harder for the model to learn and replicate the artist's distinctive style. Nightshade goes further: it is a data-poisoning tool whose perturbations can cause models trained on the altered images to mislearn the concepts those images depict. Both tools have been widely adopted by the artist community, though their long-term effectiveness against evolving training techniques remains debated.
- Do Not Train registries: Organizations including Spawning AI have developed opt-out registries where artists can declare that their work should not be used for AI training. The "ai.txt" protocol, modeled after "robots.txt," allows website operators to signal that their content should be excluded from AI training crawls. Several major AI companies have committed to respecting these opt-out signals, though compliance is voluntary and enforcement is difficult to verify.
- Contractual protections: Many artists are now adding explicit AI training exclusions to their licensing agreements and terms of service. Stock photography platforms including Getty Images and Shutterstock have updated their contributor agreements to address AI training separately from standard licensing. Some freelance illustrators and photographers include "no AI training" clauses in their contracts with clients, though the enforceability of such provisions against third parties who may subsequently scrape the work remains uncertain.
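The "ai.txt" opt-out signal mentioned above uses a line-oriented syntax modeled on robots.txt. The directives and paths below are an illustrative sketch based on Spawning's published examples, not the authoritative specification; check the current spec before deploying such a file.

```text
# ai.txt — served from the site root, e.g. https://example.com/ai.txt
# Syntax mirrors robots.txt; directive names here are an assumed sketch.

User-Agent: *
Disallow: *.jpg
Disallow: *.png
Disallow: /gallery/
Allow: /press-kit/
```

As with robots.txt, compliance is voluntary: the file expresses the site owner's preference, and only crawlers that have committed to honoring these signals will respect it.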
Best Practices for AI Art Creators
Whether or not the law eventually settles in favor of broad copyright protection for AI-generated works, establishing responsible practices now will protect you and strengthen your position regardless of how the legal landscape evolves.
- Document your process: Keep records of your prompts, iterations, selection criteria, and post-processing steps. If you ever need to demonstrate human authorship, a detailed creative log showing the decisions you made throughout the process will be far more persuasive than a single raw output.
- Disclose AI involvement: Transparency about AI usage builds trust with clients, audiences, and collaborators. Many platforms and marketplaces now require disclosure of AI-generated or AI-assisted content, and the trend toward mandatory labeling is accelerating. Proactive disclosure protects you from accusations of deception and positions you as a responsible practitioner.
- Mix AI with human creation: The strongest copyright claims attach to works where AI-generated elements are integrated into a broader human-created composition. Using AI outputs as starting points, reference material, or components within a collage or composite that involves substantial human creative decisions strengthens both the legal and ethical foundation of your work.
- Provide attribution where appropriate: While there is no legal requirement to credit an AI tool, doing so is increasingly considered good practice within creative communities. Noting the tool used and the extent of AI involvement demonstrates integrity and helps establish industry norms around responsible disclosure.
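The "document your process" advice above can be made concrete with even a very simple tool. The sketch below appends one record per generation session to a JSON Lines file; the field names, tool name, and file paths are illustrative assumptions, not any platform's required format.

```python
import json
from datetime import datetime, timezone

def log_generation(logfile, prompt, tool, outputs_reviewed, selected, edits):
    """Append one generation record to a JSON Lines creative log.

    Captures the decisions that support a human-authorship claim:
    the prompt, how many candidates were reviewed, which were kept,
    and what manual post-processing followed.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,                          # e.g. tool name and version
        "prompt": prompt,
        "outputs_reviewed": outputs_reviewed,  # candidates generated and reviewed
        "selected": selected,                  # filenames of outputs kept
        "post_processing": edits,              # manual edits applied afterwards
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Example session record (all values hypothetical)
entry = log_generation(
    "creative_log.jsonl",
    prompt="watercolor fox, dawn light, loose brushwork",
    tool="ExampleGen v2",
    outputs_reviewed=48,
    selected=["fox_v3_upscaled.png"],
    edits=["repainted background", "manual color grade"],
)
```

An append-only log like this is deliberately low-tech: each line is a self-contained JSON object with a timestamp, so it can be kept under version control or archived alongside the final files as evidence of the creative process.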
The Licensing Landscape
The traditional licensing frameworks that govern creative works are adapting — sometimes awkwardly — to the realities of AI-generated content. Creative Commons, the most widely used open licensing system, has grappled with how its licenses apply to works that may lack a human author entirely. CC licenses require an underlying copyright to function; if an AI-generated work is not copyrightable, applying a CC license to it is arguably meaningless, since there are no rights to license.
In response, several new licensing frameworks have emerged specifically for AI-generated and AI-assisted content. The Responsible AI Licenses (RAIL) framework, originally developed for model weights, has been extended to cover model outputs. Some creators have adopted bespoke licenses that specify the degree of AI involvement and the terms under which the work can be used, modified, or used as training data for other models.
Stock photography platforms are navigating this transition with varying approaches. Shutterstock was among the first to accept AI-generated images into its library, launching a dedicated contributor fund to compensate artists whose work was used in training. Adobe Stock accepts Firefly-generated content but requires AI-generated images to be clearly labeled and bars them from the editorial category. Getty Images banned AI-generated submissions entirely from its platform, doubling down on its commitment to human-created imagery and its ongoing litigation against Stability AI. These divergent strategies reflect genuine uncertainty about how the market will value AI-generated content relative to human-created work.
Looking Ahead
The legal framework surrounding AI-generated art is far from settled, but several trends point toward the likely shape of things to come. First, transparency requirements are converging globally. Whether through legislation like the EU AI Act, industry self-regulation, or marketplace requirements, the expectation that AI-generated content will be clearly identified is becoming universal. Content provenance standards like C2PA, which embed verifiable metadata about how an image was created, are gaining adoption among both AI platforms and traditional camera manufacturers.
Second, some form of training data compensation is likely to emerge, though its structure remains uncertain. Whether through compulsory licensing, collective bargaining agreements modeled on music royalties, or voluntary industry frameworks, the current status quo — where training data is used without compensation — faces too much legal and political pressure to persist indefinitely. The question is whether compensation will be structured in a way that is practical to implement at the scale of modern AI training.
Third, the distinction between AI-generated and AI-assisted work will continue to blur as creative tools become more deeply integrated with AI capabilities. When every version of Photoshop, Procreate, and Blender includes generative AI features, the concept of "AI-free" art will become increasingly difficult to define. The legal framework will need to evolve toward a spectrum of human involvement rather than a binary classification.
For creators working today, the most pragmatic approach is to stay informed, document your process, be transparent about your methods, and pay close attention to the terms of the platforms you use. The law will eventually catch up to the technology. In the meantime, building habits of responsible creation is the best protection any artist — human or human-plus-AI — can have.