The quick rise of AI tools for music creation has sparked numerous legal battles between these emerging tech companies and the established music industry. Three major legal actions (two sets of civil lawsuits and a criminal indictment) could reshape the landscape for both AI development and music copyright law.
A special thanks to Kevin Casini for reviewing the legal details in today’s post.
Case 1: Record labels vs. Suno and Udio
In June 2024, Universal Music Group, Sony Music Entertainment, Warner Music Group and other major record labels filed lawsuits against AI music generation companies Suno and Udio. The labels allege these companies engaged in "systematic and widespread infringement" of copyrighted sound recordings to train their AI models.
Key allegations
Suno and Udio copied "decades worth of the world's most popular sound recordings" without permission to train their AI models
The AI models can generate outputs that closely resemble specific copyrighted recordings
This infringes on labels' reproduction, distribution, and public display rights
The companies profit from this infringement through paid subscriptions and increased company valuations
The lawsuits cite specific examples of AI-generated songs that closely mimic hits like "My Girl" by The Temptations, "American Idiot" by Green Day, and "All I Want for Christmas is You" by Mariah Carey.
Suno's response: “It is fair use under copyright law to make a copy of a protected work as part of a back-end technological process,” and “The outputs generated by Suno are new sounds, informed precisely by the ‘styles, arrangements and tones’ of previous ones. They are per se lawful.”
Udio's response: The company is "completely uninterested in reproducing content in our training set" and has implemented filters to prevent reproducing copyrighted works.
Case 2: Music publishers vs. Anthropic
In October 2023, major music publishers including Universal Music Publishing Group, Sony Music Publishing, and Warner Chappell Music sued AI company Anthropic over its Claude chatbot. The publishers allege Claude infringes their copyrights by reproducing song lyrics without permission.
Key allegations
Anthropic copied publishers' lyrics to train Claude
Claude can reproduce complete or near-complete song lyrics on request
This violates publishers' reproduction and distribution rights
Anthropic profits from this infringement through API access fees and investments
The lawsuit cites examples of Claude reproducing lyrics to songs like "Roar" by Katy Perry, "I Will Survive" by Gloria Gaynor, and "Friends in Low Places" by Garth Brooks.
Anthropic's response: Anthropic claimed that song lyrics are not in typical outputs, as “… Normal people would not use one of the world’s most powerful and cutting-edge generative AI tools to show them what they could more reliably and quickly access using ubiquitous web browsers.”
Case 3: Criminal charges against Michael Smith
Most recently, in September 2024, the U.S. Department of Justice indicted Michael Smith on charges of wire fraud and money laundering related to an alleged AI-powered streaming fraud scheme.
Key allegations
Smith used AI to generate "hundreds of thousands of songs" since 2017
He then used bots to artificially stream these songs billions of times
This scheme fraudulently earned over $10 million in streaming royalties
Smith worked with AI music company Boomy to create songs for the scheme
The indictment cites emails where Smith discussed needing "a TON of songs fast to make this work around the anti fraud policies."
Boomy's response: The company was "shocked by the details in the recently filed indictment" and that Smith "consistently represented himself as legitimate."
The meat of the matter
These cases touch on several key legal issues that could set important precedents for AI and copyright law:
1. Fair use and transformative works
The AI companies will likely argue their use of copyrighted material for training falls under fair use. They may claim the outputs are transformative works that don't compete with the originals.
Counter-argument: The music industry contends that generating nearly identical copies is not transformative and directly competes with licensed uses of their works.
Potential precedent: If courts side with the AI companies, it could expand fair use protections for training AI on copyrighted works. A ruling for the music industry could restrict AI training data to public domain or explicitly licensed works.
2. Reproductions in machine learning
These cases may force courts to grapple with whether temporary copies made during the machine learning process constitute copyright infringement. That question could also reopen long-running debates over ephemeral recordings, the temporary copies the music industry has long made for technical purposes.
Potential precedent: A ruling that such copies infringe could have wide-ranging impacts on AI development across industries. A finding that they don't could open the door to unrestricted use of copyrighted material for AI training.
3. Output Liability
The cases raise questions about liability for AI-generated outputs that resemble copyrighted works. Do the similarities result from copying of works in the training data, or were the outputs generated independently by the AI?
Potential precedent: Courts may need to establish new standards for proving infringement by AI systems and determine liability when humans prompt AI to create infringing content. Any novel standard, however, would be vulnerable to reversal on appeal.
4. Streaming Fraud
The criminal case against Michael Smith could impact how streaming platforms detect and prevent artificial inflation of play counts, especially as AI makes bot creation more sophisticated.
Potential precedent: A conviction could spur new regulations around streaming metrics and royalty payments to combat AI-powered fraud.
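To make the detection problem concrete, here is a minimal sketch of the kind of heuristic a streaming platform might apply to flag bot-driven accounts. Everything here is an assumption for illustration: the field names, the thresholds, and the two signals (physically implausible daily play counts, and plays spread suspiciously thin across a huge catalog, the pattern the Smith indictment describes) are hypothetical, not any platform's actual method.

```python
from dataclasses import dataclass

MAX_HUMAN_PLAYS_PER_DAY = 24 * 60 // 3   # ~480 three-minute tracks back to back
MIN_REPEAT_RATIO = 0.05                  # real listeners replay at least some favorites

@dataclass
class AccountActivity:
    plays_per_day: float      # average daily play count over the review window
    unique_tracks: int        # distinct tracks streamed in the window
    total_plays: int          # total streams in the window

def looks_like_bot(acct: AccountActivity) -> bool:
    """Flag accounts whose listening pattern is physically implausible
    or spread suspiciously thin across an enormous catalog."""
    if acct.plays_per_day > MAX_HUMAN_PLAYS_PER_DAY:
        return True   # more plays than fit in a day of nonstop listening
    if acct.total_plays and acct.unique_tracks / acct.total_plays > (1 - MIN_REPEAT_RATIO):
        return True   # almost never repeats a track: classic catalog-spreading
    return False
```

Real systems would combine many more signals (IP clustering, payment linkage, timing regularity), but even this toy version shows why spreading billions of streams across hundreds of thousands of AI-generated songs is a strategy aimed at staying under per-track thresholds.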
What’s next?
These lawsuits reflect growing tensions between the music industry and AI companies. Outcomes could reshape several aspects of the business:
Licensing and royalties: If AI companies prevail, it may devalue existing licensing models for lyrics and samples. A music industry win could create new licensing markets for AI training data.
Creation and ownership: Unclear AI copyright status may discourage collaborations between human artists and AI tools. Strict restrictions could limit AI's creative potential in music production.
Marketing and discovery: AI tools could change how fans discover music and interact with artists' catalogs. Legal uncertainty may slow adoption of AI for music recommendation and playlist generation.
Streaming economics: The criminal case highlights vulnerabilities in streaming-based royalty models. Platforms may need to develop new fraud detection methods and payout structures.
Implications for the AI industry
These cases could have far-reaching effects on AI development practices:
Training data: A ruling against the AI companies may force developers to rely more heavily on public domain works or create expensive licensing schemes for training data.
Model architecture: To avoid infringement claims, AI architectures may need to be redesigned to prevent close mimicry of training examples.
Transparency: Lawmakers may require more disclosure about training data and model capabilities, challenging the "black box" nature of many AI systems.
Regulatory scrutiny: These high-profile cases could accelerate calls for AI-specific regulations, particularly around copyright and content generation.
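One common proposal for preventing close mimicry of training examples, and roughly the kind of filter Udio says it has deployed, is to screen generated outputs for verbatim overlap with protected works before release. The sketch below uses n-gram overlap on lyrics as a crude memorization proxy; the function names, the 5-gram window, and the 0.2 threshold are all hypothetical choices for illustration, not any company's actual pipeline.

```python
def ngrams(tokens, n=5):
    """Set of all contiguous n-token windows in a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def overlap_score(output_tokens, reference_tokens, n=5):
    """Fraction of the output's n-grams that also appear in a protected
    reference work; a crude proxy for verbatim memorization."""
    out = ngrams(output_tokens, n)
    if not out:
        return 0.0
    return len(out & ngrams(reference_tokens, n)) / len(out)

def release_output(output_tokens, protected_corpus, n=5, threshold=0.2):
    """Block generations that substantially reproduce any protected work.
    The threshold is arbitrary, chosen for illustration only."""
    return all(overlap_score(output_tokens, ref, n) < threshold
               for ref in protected_corpus)
```

A filter like this only catches near-verbatim reproduction; it says nothing about style or melody similarity, which is exactly why courts may still have to define where inspiration ends and infringement begins.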
Things we should be asking
As these cases progress, several key questions remain:
How will courts define the line between inspiration and infringement for AI-generated works?
What level of similarity between AI outputs and copyrighted works constitutes infringement?
How can streaming platforms reliably distinguish between human and AI-generated music?
Will these cases lead to new licensing models for use of copyrighted material in AI training?
How might rulings in these cases impact other creative industries like visual art, literature, and film?
The outcomes of these lawsuits, prosecutions, and future legal battles will likely shape the future of both the music and AI industries for years to come. They highlight the urgent need for legal frameworks to catch up with rapidly advancing AI capabilities.
As these music tools (and other AI tools in general) become more sophisticated at mimicking human creativity, finding the right balance between innovation and protecting artists' rights will be critical. The decisions in these cases may determine whether AI becomes a collaborative tool that enhances human creativity in music or a disruptive force that undermines the industry's economic foundations.