Landmark Lawsuit: New York Times vs. OpenAI and Microsoft - Copyright Infringement or Fair Use?
Summary:
The New York Times (NYT) has filed a major copyright infringement lawsuit against OpenAI and Microsoft for using its archival content to train AI models without permission. The outcome could reshape AI regulation and content creators' rights. Legal experts say OpenAI is pushing for a broader interpretation of 'fair use' to enable AI advancement, while the NYT argues that this usage does not qualify as fair use. The lawsuit also highlights the absence of a legal framework governing the use of AI training data. Content creators currently rely on the Copyright Act to protect their intellectual property, a situation that may change with the proposed AI Foundation Model Transparency Act. The verdict is keenly awaited, as it is expected to shape future debate on AI regulation, technological innovation, intellectual property rights, and the ethics of AI model training.
The New York Times (NYT) is locked in a major legal battle with OpenAI and Microsoft, accusing them of copyright infringement for using the newspaper's archival content to train their AI models without permission. The two sides have traded arguments, with OpenAI dismissing the NYT's allegations as baseless and the NYT countering that OpenAI's use of its content does not qualify as 'fair use' under any circumstances. The dispute is being closely watched in both the AI and legal fields because of its potential impact on AI regulation and the protection of creators' rights.
To delve deeper into the legal complexities of the dispute, Cointelegraph consulted Bryan Sterba of Lowenstein Sandler and Matthew Kohel of Saul Ewing. Sterba suggested that OpenAI is arguing for a more expansive application of the 'fair use' defense, a position not widely reflected in existing law but one OpenAI sees as critical to the evolution of generative AI. He added that OpenAI frames this essentially as a matter of public policy, a stance adopted in other jurisdictions to avoid stifling AI's progress. Sterba noted that predicting a court's ruling is difficult, but said the NYT has a strong case for copyright infringement.
Kohel pointed out that the stakes in the lawsuit are remarkably high. The NYT is seeking billions of dollars in damages, arguing that OpenAI is giving away the newspaper's exclusive content, which is typically reserved for paid subscribers. According to Kohel, a ruling against the infringement claim would allow OpenAI and other AI companies to freely use and reproduce the NYT's content, one of the newspaper's most valuable assets.
Kohel emphasized that current legislation does not explicitly address the use of data to train AI models. As a result, content creators, including the NYT and authors such as Sarah Silverman, are turning to the Copyright Act to protect their intellectual property rights. That may change: United States lawmakers introduced the AI Foundation Model Transparency Act in December, which, if passed, would regulate the use and disclosure of training data.
On the defensive, OpenAI has proposed giving publishers the option to "opt out" of data collection, describing it as the "right thing" to do. Sterba remarked that an opt-out would provide little comfort to the NYT and other publishers, who remain in the dark about how much of their copyrighted content OpenAI has already collected.
As the litigation continues, it underscores a rapidly evolving legal landscape around AI, a matter of concern for both creators and developers. Kohel stressed the need for developers and creators to stay informed, citing the executive order issued by President Biden in October 2023 as evidence that lawmakers are paying attention to AI's societal impact; alongside intellectual property rights, national security is another area of concern. He advised content creators to register their works with the Copyright Office, since developers may need to pay licensing fees to use their content to train AI models. The verdict in this legal contest is keenly awaited and could greatly influence future debates on AI regulation, the balance between technological innovation and intellectual property rights, and the ethics of training AI models on public data.
Published At
1/15/2024 1:16:34 PM
Disclaimer: Algoine does not endorse any content or product on this page. Readers should conduct their own research before taking any actions related to the asset, company, or any information in this article and assume full responsibility for their decisions. This article should not be considered as investment advice. Our news is prepared with AI support.