
New York Times, a Federal Lawsuit and the Uncertainty of ChatGPT’s Future.

In December 2023, The New York Times filed a federal lawsuit against OpenAI and Microsoft, aiming to end the practice of using published material to train their chatbots. On January 15, 2025, OpenAI and Microsoft moved to dismiss parts of the lawsuit, a motion opposed by lawyers for the New York Daily News, the New York Times, and others. The cause of the lawsuit? The apparent blurring of the line between regurgitation of information and infringement, arguably a by-product of the lack of regulation surrounding the operation of artificial intelligence.

This article will explore the claim that has been put forward, the defence to said claim, possible future outcomes and potential future international implications surrounding the utilisation of artificial intelligence (AI).

The New York Times’ Claim

As reported by NPR, a group of news organisations, led by the New York Times, took OpenAI, the maker of ChatGPT, to federal court in a hearing that could determine whether the tech company must face the publishers in a high-profile copyright infringement trial. Although numerous other publishers have reached deals with OpenAI regarding the use of their content, the New York Times’ core argument is that the large-scale use of its copyrighted work without consent or payment amounts to copyright infringement.

The publishers’ legal team argued that “both ChatGPT and Microsoft are profiting from journalistic work that was scanned, processed, and recreated without payment or consent.” The incorporation of OpenAI’s technology into Microsoft’s Bing search engine is also a point of contention. Ian Crosby, a Times attorney, claimed that this goes beyond merely incorporating information and has become a “substitute for the publishers’ original work”.

Overall, the basis of the claim is the “unlawfulness” of OpenAI’s use of the Times’ property without payment or consent.

OpenAI’s Argument

OpenAI has argued that the ‘fair use’ doctrine protects and permits its use of data to train its artificial intelligence bot. This doctrine in American law allows copyrighted material to be used for purposes such as education, research or commentary. For fair use to apply, the copyrighted work must have been transformed into something new, and the new work cannot compete with the original work.

Attorneys for OpenAI have explained that AI models are fed data, analyse said data and recognise patterns. Joseph Gratz, an OpenAI lawyer, stated, “This isn’t a document retrieval system. It is a large language model,” and added that regurgitating entire articles “is not what it is designed to do and not what it does”.

Microsoft is attempting to frame The New York Times’ case as an exercise of corporate power, saying the publisher is using its “might and megaphone” to challenge technology it views as threatening. However, The New York Times recently doubled down in court, arguing that ChatGPT is now competing with its articles as a source of valuable and reliable information.

The Possible Outcome and Future Implications

This is a significant case with high stakes. As NPR has reported, OpenAI could face billions of dollars in damages for allegedly copying the newspaper’s archive unlawfully, and could be ordered to destroy ChatGPT’s dataset. This would force OpenAI to rebuild its dataset, relying solely on work it has been authorised to use.

The underlying issue in this case may stem from the fact that legal regulations are struggling to keep pace with the rapid advancement and deployment of artificial intelligence. AI has the potential to bring significant benefits to society; however, as this lawsuit demonstrates, there is a substantial risk of commercial exploitation. This risk could lead many jurisdictions to regulate or litigate pre-emptively to prevent harm before it occurs.

The New York Times’ lawsuit against OpenAI highlights the profound legal and ethical complexities of regulating artificial intelligence in a rapidly evolving technological landscape. At its core, the case challenges how copyright laws, originally designed for human creators, can be applied to AI models that rely on vast amounts of data, including protected works, for training. This lawsuit is not just a battle over intellectual property; it is a test of whether existing laws can adapt to disruptive technologies or whether entirely new frameworks are needed. Without decisive action, the tension between innovation and creators’ rights will continue to escalate, creating a chaotic environment where legal uncertainty stifles both technological progress and artistic protection.