Navigating the New Frontier: Generative AI, Copyright, and Personality Rights in Media

Generative AI is rapidly transforming the media industry. It creates everything from articles and music to lifelike digital actors. This technology offers incredible creative possibilities. However, it also brings complex legal questions to the forefront. As a result, the conversation around Generative AI copyright and personality rights in media has become more critical than ever before.

This new wave of innovation challenges our traditional understanding of ownership and identity. For instance, how do we handle copyright when an AI is trained on vast amounts of protected works without permission? Furthermore, what happens when an AI generates content that perfectly mimics a celebrity’s voice or likeness? These are not hypothetical scenarios anymore; they represent active legal battles shaping the future of creative industries.

The outcomes of these disputes are establishing new standards for the entire media landscape. Consequently, courts, creators, and technology companies are all grappling with these significant issues. This article explores the ongoing legal and regulatory battles. We will delve into key areas like fair use, emerging licensing frameworks, and the evolving concept of personality rights in our increasingly digital age. The goal is to understand the complex interplay between technology, law, and creativity.

The Training Data Dilemma: Copyright’s First Hurdle

The most fundamental copyright issue with generative AI stems from how these models are trained. Companies like OpenAI, Google, and Meta build their systems by feeding them enormous datasets, often scraped from the public internet. This data frequently includes billions of images, texts, and sounds protected by copyright, ingested without the creators’ permission. As a result, this practice has led to significant legal challenges from artists, authors, and media companies. A prominent example is the lawsuit filed by Getty Images against Stability AI, accusing the company of using millions of its licensed images without authorization to train its image generation model.

The core legal arguments revolve around several key points:

  • Unauthorized Reproduction: The simple act of copying a copyrighted work into a training dataset can be seen as infringement. Creators argue this is a direct violation of their exclusive rights.
  • Creation of Derivative Works: Many legal experts contend that AI-generated outputs, which learn from and mimic the styles of original works, should be classified as derivative works. Under copyright law, only the original copyright holder has the right to create or authorize such works.
  • Lack of Compensation: The current model allows tech companies to build commercially valuable systems using creative content without compensating the original artists and authors. This practice disrupts the economic foundation of creative industries.

Fair Use Under Fire: A Strained Doctrine in the Age of AI

In response to infringement claims, AI developers often rely on the doctrine of “fair use.” This legal principle permits the limited use of copyrighted material without permission for purposes such as criticism, commentary, news reporting, and research. The central pillar of a fair use defense is whether the new work is “transformative,” meaning it adds new expression or meaning and does not simply substitute for the original. More information is available from the U.S. Copyright Office.

However, applying this doctrine to generative AI is highly contentious. AI companies argue that training models is transformative because the AI learns underlying patterns rather than storing and reproducing copies. They claim the purpose is to create a new system, not to republish the original works. On the other hand, creators and media organizations argue that AI-generated outputs often compete directly with the original works in the market. For example, if an AI can generate an image in the style of a specific photographer, it could reduce demand for that photographer’s licensed work, creating a clear market substitution risk. Courts are now tasked with weighing these arguments, and their decisions will profoundly shape the future of Generative AI copyright and personality rights in media. The ongoing battle between Getty Images and Stability AI illustrates the global nature of this dispute, with parallel proceedings in both the U.S. and the U.K.

Beyond Copyright: Protecting Human Identity and Personality Rights

While copyright law addresses the use of creative works, a separate legal domain protects the use of an individual’s identity. This area, known as personality rights or the “right of publicity,” is becoming a central battleground in the era of generative AI. These rights govern the commercial use of a person’s name, image, voice, and other unique characteristics. As AI technology makes it easy to create synthetic replicas of real people, the media industry faces urgent questions about consent, compensation, and authenticity.

The Challenge of Synthetic Likeness and Voice Cloning

Generative AI can now produce highly convincing “deepfakes” and voice clones that are nearly indistinguishable from the real person. This capability creates significant risks for public figures and artists whose identities are their brands. For example, an AI could be used to:

  • Create a synthetic version of an actor for a film or advertisement without their permission.
  • Generate a song that perfectly mimics a famous singer’s voice, potentially deceiving listeners and devaluing the artist’s work.
  • Produce fake endorsements or statements, leading to reputational damage and misinformation.

These actions directly implicate an individual’s right to control how their likeness is used, particularly for commercial gain. Many legal systems prohibit the unauthorized use of a person’s image: in Austria, the “right to one’s own image” is protected under § 78 of the Copyright Act (UrhG), building on the general personality rights of § 16 of the General Civil Code (ABGB). Similarly, in the United States, the right of publicity is protected through a patchwork of state laws that prevent the appropriation of a person’s identity for commercial advantage without consent.

High-Profile Clashes and Industry Pushback

The potential for misuse is not just theoretical. Several high-profile cases have already emerged, forcing a confrontation between tech developers and rights holders. A notable recent example involved actress Scarlett Johansson, who stated that “Sky,” the voice of OpenAI’s new AI assistant, was “eerily similar” to her own, despite her having previously declined an offer to voice the system. The incident sparked a public debate over the ethics and legality of replicating a person’s distinct vocal identity and led OpenAI to pause the use of the voice.

Similarly, the music industry has taken a strong stance. Universal Music Group has actively worked to remove AI-generated songs that mimic the voices of its artists, such as the viral track “Heart on My Sleeve,” which replicated the voices of Drake and The Weeknd. The company argued that such creations violate both copyright and personality rights, setting a clear precedent that artists’ voices are a core part of their identity and brand, deserving of legal protection. These events underscore the growing need for clear regulations and ethical guidelines to govern the use of personal likeness in AI-generated media.

Global Legal Perspectives: A Comparative Overview

The legal landscape surrounding generative AI is evolving at different paces and with different priorities across the globe. Understanding these regional nuances is crucial for media professionals, creators, and technology developers. The following overview provides a high-level comparison of the legal frameworks in the United States, the European Union, and Austria.

AI Authorship & Ownership

  • United States (USA): Requires human authorship. The U.S. Copyright Office has consistently refused to grant copyrights to works created solely by AI; ownership is tied to the human creator’s input. For more information, see the official guidance at copyright.gov.
  • European Union (EU): No copyright for purely AI-generated works. Protection requires an “author’s own intellectual creation,” implying human involvement. The legal status of works with significant human-AI collaboration is still being defined.
  • Austria: Follows the EU standard. Austrian copyright law protects “personal intellectual creations,” meaning a human author is necessary. AI-generated content without sufficient human input is not copyright protected.

Personality & Publicity Rights

  • United States (USA): No federal law. A complex patchwork of state laws protects against the unauthorized commercial use of a person’s likeness (right of publicity). Protection levels vary significantly from state to state.
  • European Union (EU): Strongly protected under the GDPR, which governs the use of personal data, including images and biometric voice data. The upcoming AI Act will add further safeguards against unauthorized synthetic media; more on this can be found at the European Parliament’s website.
  • Austria: Robust protection under § 16 of the General Civil Code (ABGB), which anchors general personality rights, and § 78 of the Copyright Act (UrhG), which protects the “right to one’s own image.” Consent is required for the use of a person’s likeness.

Rules on Training Data

  • United States (USA): Heavily reliant on the “fair use” doctrine, which is flexible but legally uncertain. Courts are currently deciding whether training AI models on copyrighted data is transformative enough to qualify; multiple high-profile lawsuits are ongoing.
  • European Union (EU): Regulated by the Copyright in the Digital Single Market (CDSM) Directive, which provides a specific exception for Text and Data Mining (TDM) but, importantly, allows rightsholders to expressly “opt out” of having their works used for commercial AI training.
  • Austria: Implements the EU’s CDSM Directive, including the TDM exception and the opt-out mechanism. This provides a clearer, though more restrictive, framework than the U.S. fair use approach.

Liability for Infringement

  • United States (USA): Generally falls on the end-user who generates the infringing content or on the company providing the service, depending on the terms of service and the level of control. This is an area of active litigation.
  • European Union (EU): The upcoming AI Act will establish a tiered liability framework; liability can be assigned to the provider of the AI system, the developer, or the user, depending on their role and the nature of the infringement.
  • Austria: Follows general EU principles. Typically, the person or entity that uses the AI to create and publish infringing content is held liable; the liability of AI providers is a developing area of law.

The Path Forward: Balancing Innovation and Protection

The rise of generative AI has undeniably created a paradigm shift in the media industry, bringing both unprecedented creative tools and profound legal challenges. As we have explored, the core conflict centers on adapting long-standing principles of copyright and personality rights to a technology that operates on a scale and in a manner that was previously unimaginable. The debates surrounding training data, the fair use doctrine, and the unauthorized use of personal likenesses are not merely academic; they are actively shaping the future of creative ownership and personal identity.

The legal landscape is in a state of dynamic flux. Courts are grappling with landmark cases, and policymakers are working to establish new regulatory frameworks, such as the EU’s AI Act. Because of this, the clear rules of the road for Generative AI copyright and personality rights in media are still being written. What is certain is that the solution will require a delicate balance. It must be one that protects the rights and livelihoods of creators and individuals without stifling the technological innovation that promises to enrich our media ecosystem.

Moving forward, collaboration between technology developers, content creators, legal experts, and regulators will be essential. Establishing clear ethical guidelines, transparent data practices, and fair licensing models is the most viable path to responsible AI adoption. For everyone involved in the media industry, from artists and journalists to executives and lawyers, staying informed and adaptable is no longer optional. It is a necessity for navigating this new and exciting technological frontier responsibly.

Frequently Asked Questions

Can I copyright content I create using generative AI?

Generally, you cannot copyright work that is purely generated by AI. Legal frameworks, particularly in the United States and the European Union, require a work to have significant human authorship to be eligible for copyright protection. The U.S. Copyright Office has stated that the key question is whether the work is a “product of human authorship.” If you use AI as a tool and have substantial creative control over the final output—through specific prompts, selecting, arranging, and modifying the content—you may be able to claim copyright over your contribution. However, if you simply provide a basic prompt and the AI generates the entire work, it will likely be considered to be in the public domain.

Is it legal for AI companies to use my copyrighted images or texts for training data?

This is one of the most contentious legal questions right now, and the answer varies by region. In the United States, AI companies argue that using public data for training constitutes “fair use,” a legal doctrine that permits the limited use of copyrighted material for transformative purposes. However, numerous lawsuits are challenging this claim, and courts have not yet made a final ruling. In the European Union, the law is more structured. The Copyright in the Digital Single Market (CDSM) Directive provides an exception for Text and Data Mining (TDM), but it also gives rightsholders the ability to expressly “opt-out” of having their work used for commercial AI training.

What can I do if an AI generates content using my voice or likeness without permission?

This issue falls under personality rights, or the “right of publicity.” If an AI-generated work uses your image, voice, or other personal characteristics for commercial purposes without your consent, you likely have legal recourse. In the U.S., these rights are protected by a variety of state laws. In Austria and the wider EU, personality rights are strongly protected under civil codes and data privacy laws like GDPR. The first step is typically to send a cease-and-desist letter to the platform or user hosting the content. If that fails, legal action may be necessary to claim damages and stop the unauthorized use.

Who is legally responsible if AI-generated content is defamatory or infringes on copyright?

Determining liability is complex and is still a developing area of law. Generally, liability could fall on several parties. The end-user who created and published the infringing or defamatory content is often a primary target. However, the company that developed the AI model could also be held liable, especially if they were found to have encouraged or facilitated the infringement. The EU’s upcoming AI Act aims to create a clearer liability framework, assigning responsibility based on the roles of the provider, developer, and user of the AI system.

As a media creator, how can I protect my work from being used for AI training?

While no method is foolproof, there are several steps creators can take. In the EU, you can use a machine-readable rights reservation protocol (like placing a specific notice in your website’s terms of service or metadata) to opt out of the TDM exception. Some creators are using digital watermarking to embed ownership information in their work, which can help prove infringement later. Additionally, industry groups and registries are emerging to help creators manage their rights and collectively license their work for AI training, offering a potential path for compensation and control in the future.
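As a concrete illustration of a machine-readable reservation, many site owners block known AI training crawlers in robots.txt. The sketch below uses user-agent tokens that the respective operators have publicly documented (GPTBot for OpenAI, Google-Extended for Google’s AI training controls, CCBot for Common Crawl); note that compliance is voluntary on the crawler’s side, and courts have not yet settled which signals count as a valid TDM opt-out under the CDSM Directive.

```text
# robots.txt — example opt-out signals for AI training crawlers.
# These tokens are publicly documented by their operators;
# honoring them is voluntary, not a legal guarantee.

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```

Some creators supplement this with a written rights reservation in their terms of service or with metadata-level notices; because the legally required form of the opt-out is still unsettled, using several signals in parallel is a common precaution.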

The information provided here is general, non-binding legal information and makes no claim to currency, completeness, or accuracy. It is offered exclusively as a free public service and does not establish an attorney-client or consulting relationship. For further information or specific legal advice, please contact our law firm directly. We therefore assume no guarantee for the topicality, completeness, and correctness of the pages and content provided.

Any liability claims relating to damages of a non-material or material nature caused by the publication, use, or non-use of the information presented, or by the publication or use of incorrect or incomplete information, are fundamentally excluded, provided there is no demonstrable willful intent or grossly negligent conduct.

For additional information and contact, please refer to our Legal Notice (Impressum) and Privacy Policy.