Anthropic has achieved a significant legal victory in a case concerning the use of millions of copyrighted books to train its AI chatbot, Claude. On Tuesday, Judge William Alsup of the U.S. District Court for the Northern District of California ruled in favor of Anthropic, holding that the company's use of legally acquired books did not breach U.S. copyright law. The judgment could set an important precedent for similar cases. Anthropic, founded by former executives of OpenAI, the company behind ChatGPT, launched Claude in 2023.
Like other generative AI tools, Claude allows users to pose questions in natural language and returns concise, AI-generated responses drawn from vast amounts of text, including books and articles. In his ruling, Judge Alsup wrote that Anthropic's use of copyrighted material to train its large language model (LLM) was "quintessentially transformative," stating that the company's intent was to generate new content rather than simply replicate existing works. The ruling did not, however, entirely absolve Anthropic of legal exposure.
Alsup found that the company may have violated copyright law by downloading millions of pirated books, a question to be addressed in a separate trial scheduled for December. Internal documents showed that Anthropic staff had raised concerns about the legality of using pirated content, prompting the company to change course and hire a former Google executive experienced in copyright matters. The lawsuit was brought by three authors, Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, who alleged that Anthropic's actions amounted to "large-scale theft" and that the company was profiting from the creativity embedded in their works.
Other AI firms face similar scrutiny over how they source material for their models; The New York Times, for example, has sued OpenAI and Microsoft over the use of its articles.