In April 2024, we published a summary of the then-current state of artificial intelligence (“AI”)-related copyright litigation, which you can find here. Since that publication, new theories supporting both complaints and defenses have emerged in this space. As 2024 draws to a close, we provide a brief update on this litigation.
More than 15 notable suits are pending across the country in which copyright owners are pursuing various theories of infringement against AI platforms.1 These plaintiffs allege that AI models infringe their copyrights because the models were trained on copyrighted works,2 because the output of the models itself infringes,3 or both.4 The U.S. Copyright Office, which in August 2023 issued a Notice of Inquiry seeking comments about the collection and curation of AI dataset sources and how those datasets are used to train AI models,5 has since published Part 1 of its report on copyright and AI,6 which addresses “digital replicas,” that is, audio and visual content created using AI that “realistically but falsely” depicts an individual (also known as “deepfakes”).7 Additional parts of the report are expected in the near future.
This article provides updates on the most prominent ongoing AI copyright cases and their legal theories, as well as recent developments from the U.S. Copyright Office.
Plaintiffs’ Copyright Infringement Theories
Theories Regarding Removal of Copyright Management Information Under the DMCA Have Run into Article III Standing Issues.
Most plaintiffs in the cases cited have asserted direct infringement claims alleging that the respective AI company accessed copyrighted material and made copies of it to train a given AI model. Other plaintiffs, such as Getty Images, Raw Story Media and the plaintiffs in the GitHub litigation, have recently relied on a more novel theory under the Digital Millennium Copyright Act (“DMCA”): improper removal of copyright management information.8 Specifically, these plaintiffs have alleged that the defendants removed copyright management information (including the copyright notice, title, identifying information, terms of use, and identifying symbols or numbers) from the copyrighted works, or caused their AI products to omit this information from outputs.9 In November 2024, however, the U.S. District Court for the Southern District of New York ruled that such a theory, standing alone, does not give rise to Article III standing, which requires a concrete injury (even where a statutory violation is alleged) for a party to bring suit, and dismissed without prejudice a case relying on it. The court stated that it was “skeptical about Plaintiffs’ ability to allege a cognizable injury but prepared to consider an amended pleading.”10 Nonetheless, some plaintiffs allege an injury separate from the improper removal of copyright management information,11 namely, direct copyright infringement; those cases have not yet been resolved.
Theories Regarding Post-Training, Pre-Output Infringement and Substantially Similar Outputs Are Still Pending.
Some plaintiffs argue that, during the period between training a given model and the model producing an output, the AI model contains compressed copies of the works within its model weights, and that this unauthorized copying should be considered direct infringement even if the whole of each work is not represented in a traditional form.12 Plaintiffs have also argued that AI outputs can themselves infringe by being substantially similar to copyrighted works.13 These theories have not yet been tested because the disputes are still pending, with the parties recently litigating discovery disputes.14
Defenses
Since our last publication in April 2024, some defendants have filed answers to the complaints. As to plaintiffs’ claims about “compressed copies,” defendants OpenAI and Anthropic PBC claim that their use of copyrighted materials is transformative fair use because it adds new elements to the works and creates new, transformative outputs.15 To support fair use arguments, some defendants, including OpenAI, Microsoft, Bloomberg and GitHub, have asserted that their use of copyrighted materials is permissible because the AI models’ outputs merely build upon copyrighted works rather than replicating protected expression.16 Several defendants have asserted that any copying that may have occurred was de minimis, not rising to the level of infringement.17 Notably, OpenAI has argued that some of the material used in its platform is not protected by copyright at all, such as public domain materials, “scènes à faire” (expressions that are common to a genre, which are not copyrightable), or uncopyrightable ideas rather than protected expression.18
Future Comment by the Copyright Office
The Copyright Office is continuing its investigation into the copyright law and policy issues raised by AI and has published the first part of its report on copyright and AI.19 The published part, however, is focused on “digital replicas” and the Copyright Office’s thoughts on new legislation that would create a licensable right to digital replicas of one’s likeness.20 It does not address a possible compulsory licensing scheme in which AI platforms may have to participate; further information about any such regulatory scheme is expected in future parts of the Copyright Office’s report.
Risk Allocation by Contract
Because copyright infringement liability in relation to AI remains uncertain, parties should carefully consider how any given contract relating to AI allocates liability for potential copyright infringement. As we previously noted, some AI vendors offer enterprise and developer customers limited indemnity protections, which are often delineated in the terms of use of specific AI products.21 While vendors will generally indemnify users against third-party infringement claims related to outputs, some will not indemnify users for claims that the training data and inputs were infringing.22 Parties should therefore review the scope of indemnities when choosing AI services vendors and pay careful attention to what such indemnities leave out. They should also keep in mind that some contractual provisions in social media companies’ and other content providers’ terms of service that attempt to limit the use of third-party data may be preempted by the Copyright Act,23 and the effect of any such preemption on ultimate liability is uncertain.
Conclusion
As the calendar turns to 2025, we continue to witness a tumultuous period of litigation and policymaking that will shape the relationship between AI platforms and copyright infringement liability. Uncertainty about the risk of copyright liability arising from the development and use of AI platforms is likely to persist. It remains to be seen what the Copyright Office will recommend to Congress, how the cases discussed above will be resolved, and who will bear the potential burdens of infringement liability.
- See Thomson Reuters Enter. Ctr. GmbH v. ROSS Intel. Inc., No. 1:20-cv-00613-SB (D. Del. filed May 6, 2020); UAB Planner 5D v. Facebook, Inc., 534 F. Supp. 3d 1126 (N.D. Cal. 2021); Doe 1 v. GitHub, Inc., No. 4:22-cv-06823-JST (N.D. Cal. filed Nov. 3, 2022); Getty Images, Inc. v. Stability AI, Inc., No. 1:23-cv-00135-JLH (D. Del. filed Feb. 3, 2023); Tremblay v. OpenAI, Inc., No. 3:23-cv-03223 (N.D. Cal. filed June 28, 2023); In re Google Generative AI Copyright Litigation, No. 5:23-cv-03440 (N.D. Cal. filed July 11, 2023); Authors Guild v. OpenAI, Inc., No. 1:23-cv-08292 (S.D.N.Y. filed Sept. 19, 2023); Kadrey v. Meta Platforms, Inc., No. 23-cv-03417-VC (N.D. Cal. Nov. 20, 2023); Huckabee v. Bloomberg L.P., No. 1:23-cv-09152 (S.D.N.Y. filed Oct. 17, 2023); Concord Music Grp., Inc. v. Anthropic PBC, No. 3:23-cv-01092 (M.D. Tenn. filed Oct. 18, 2023); Andersen v. Stability AI Ltd., No. 3:23-cv-00201 (N.D. Cal. second amended complaint filed Oct. 31, 2024); The N.Y. Times Co. v. Microsoft Corp., No. 1:23-cv-11195 (S.D.N.Y. filed Dec. 27, 2023); Nazemian et al. v. Nvidia Corp., No. 24-01454 (N.D. Cal. filed Mar. 8, 2024).
- See, e.g., Complaint, Andersen v. Stability AI Ltd., No. 3:23-cv-00201 (N.D. Cal. Oct. 30, 2023).
- See, e.g., Complaint, The N.Y. Times Co. v. Microsoft Corp., No. 1:23-cv-11195 (S.D.N.Y. filed Dec. 27, 2023).
- See Tremblay v. OpenAI, Inc., No. 3:23-cv-03223 (N.D. Cal. filed June 28, 2023); In re Google Generative AI Copyright Litigation, No. 5:23-cv-03440 (N.D. Cal. filed July 11, 2023).
- Notice of Inquiry, 88 Fed. Reg. 59942 (U.S. Copyright Office Aug. 30, 2023), https://www.regulations.gov/document/COLC-2023-0006-0001.
- U.S. Copyright Office, Copyright and Artificial Intelligence Part 1: Digital Replicas (2024), https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-1-Digital-Replicas-Report.pdf.
- Id.
- See 17 U.S.C. § 1202.
- See Andersen v. Stability AI Ltd., No. 3:23-cv-00201 (N.D. Cal. filed Oct. 30, 2023); The N.Y. Times Co. v. Microsoft Corp., No. 1:23-cv-11195 (S.D.N.Y. filed Dec. 27, 2023).
- Raw Story Media, Inc. v. OpenAI Inc., No. 1:24-cv-01514 (S.D.N.Y.).
- See, e.g., Andersen v. Stability AI Ltd., No. 3:23-cv-00201 (N.D. Cal. filed Oct. 30, 2023); The N.Y. Times Co. v. Microsoft Corp., No. 1:23-cv-11195 (S.D.N.Y. filed Dec. 27, 2023); Tremblay v. OpenAI, Inc., No. 3:23-cv-03223 (N.D. Cal. filed June 28, 2023); In re Google Generative AI Copyright Litigation, No. 5:23-cv-03440 (N.D. Cal. filed July 11, 2023).
- Andersen v. Stability AI Ltd., No. 3:23-cv-00201 (N.D. Cal. filed Oct. 30, 2023).
- See Doe 1 v. GitHub, Inc., No. 4:22-cv-06823-JST (N.D. Cal. filed Nov. 3, 2022); Getty Images, Inc. v. Stability AI, Inc., No. 1:23-cv-00135-JLH (D. Del. filed Feb. 3, 2023); Concord Music Grp., Inc. v. Anthropic PBC, No. 3:23-cv-01092 (M.D. Tenn. filed Oct. 18, 2023); Andersen v. Stability AI Ltd., No. 3:23-cv-00201 (N.D. Cal. filed Oct. 30, 2023); The N.Y. Times Co. v. Microsoft Corp., No. 1:23-cv-11195 (S.D.N.Y. filed Dec. 27, 2023).
- Plaintiffs in one suit were successful in their motion to compel OpenAI to produce employee text messages and direct messages. Opinion & Order re: Letter Motion to Compel, Authors Guild v. OpenAI, Inc., No. 1:23-cv-08292 (S.D.N.Y. Dec. 2, 2024). In another pending suit, OpenAI’s request to compel additional information about plaintiffs’ pre-suit ChatGPT testing was denied. Order Granting Plaintiffs’ Motion for Relief from Discovery Order, Tremblay v. OpenAI, Inc., No. 3:23-cv-03223 (N.D. Cal. Aug. 8, 2024).
- Defendants’ Answer to First Consolidated Amended Complaint, Tremblay v. OpenAI, Inc., No. 3:23-cv-03223 (N.D. Cal. Aug. 27, 2024); OpenAI Defendants’ Answer to First Consolidated Class Action Complaint, Authors Guild v. OpenAI, Inc., No. 1:23-cv-08292 (S.D.N.Y. Feb. 16, 2024); Defendant Anthropic PBC’s Opposition to Plaintiffs’ Renewed Motion for Preliminary Injunction, Concord Music Grp., Inc. v. Anthropic PBC, No. 5:24-cv-03811 (N.D. Cal. Aug. 22, 2024).
- Defendants’ Answer to First Consolidated Amended Complaint, Tremblay v. OpenAI, Inc., No. 3:23-cv-03223 (N.D. Cal. Aug. 27, 2024); OpenAI Defendants’ Answer to First Consolidated Class Action Complaint, Authors Guild v. OpenAI, Inc., No. 1:23-cv-08292 (S.D.N.Y. Feb. 16, 2024); Defendant Bloomberg L.P.’s Reply Memorandum in Support of Motion to Dismiss, Huckabee v. Bloomberg L.P., No. 1:23-cv-09152 (S.D.N.Y. May 3, 2024); Defendant GitHub’s Answer to Second Amended Complaint in Consolidated Actions, Doe 1 v. GitHub, Inc., No. 4:22-cv-06823-JST (N.D. Cal. July 27, 2024).
- Defendants’ Answer to First Consolidated Amended Complaint, Tremblay v. OpenAI, Inc., No. 3:23-cv-03223 (N.D. Cal. Aug. 27, 2024); OpenAI Defendants’ Answer to First Consolidated Class Action Complaint, Authors Guild v. OpenAI, Inc., No. 1:23-cv-08292 (S.D.N.Y. Feb. 16, 2024); Defendant Nvidia Corporation’s Answer to Complaint, Nazemian et al. v. Nvidia Corp., No. 24-01454 (N.D. Cal. May 24, 2024).
- Defendants’ Answer to First Consolidated Amended Complaint, Tremblay v. OpenAI, Inc., No. 3:23-cv-03223 (N.D. Cal. Aug. 27, 2024); OpenAI Defendants’ Answer to First Consolidated Class Action Complaint, Authors Guild v. OpenAI, Inc., No. 1:23-cv-08292 (S.D.N.Y. Feb. 16, 2024).
- See supra note 6.
- Id. at 28, 39-41.
- See Regina Sam Penti, Georgina Jones Suzuki & Derek Mubiru, Trouble Indemnity: IP Lawsuits In The Generative AI Boom, Law360 (Jan. 3, 2024, 4:24 PM), https://www.law360.com/articles/1779936/trouble-indemnity-ip-lawsuits-in-the-generative-ai-boom.
- See, e.g., AWS Service Terms, Section 50.10.2 (“AWS will have no obligations or liability [for an Indemnified Generative AI Service] with respect to any claim: (i) arising from Generative AI Output generated in connection with inputs or other data provided by you that, alone or in combination, infringe or misappropriate another party’s intellectual property rights[.]”).
- See X Corp. v. Bright Data Ltd., No. C 23-03698 WHA (N.D. Cal. May 9, 2024).