Creators of Claude AI chatbot sued by authors for copyright violation

A group of authors has filed a lawsuit against the artificial intelligence startup Anthropic, accusing the company of “large-scale theft” by training its popular chatbot Claude on pirated copies of copyrighted books. It is the first time writers have taken aim at Anthropic and Claude specifically, though similar lawsuits have been mounting against its competitor OpenAI, the maker of ChatGPT, for more than a year.

Anthropic, a smaller San Francisco-based company founded by former OpenAI leaders, has positioned itself as a responsible, safety-focused developer of generative AI models. The lawsuit, filed in federal court in San Francisco, alleges that the company’s practices contradict those stated principles: that it built its AI product on pirated content and profited from the unauthorized use of the human expression and creativity contained in those works.

The lawsuit was initiated by writers Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, who aim to represent a class of authors of fiction and nonfiction books. In addition to the lawsuit filed by these authors, Anthropic is also facing legal action from major music publishers who claim that Claude reproduces copyrighted song lyrics without authorization.

The lawsuit against Anthropic is part of a broader wave of legal challenges confronting AI developers in San Francisco and New York. Large tech companies such as OpenAI and Microsoft are already embroiled in copyright infringement cases brought by prominent authors including John Grisham, Jodi Picoult, and George R. R. Martin, as well as by media outlets such as The New York Times, Chicago Tribune, and Mother Jones.

The crux of these lawsuits is the allegation that tech companies have used vast amounts of copyrighted material to train AI chatbots without permission from, or compensation to, the original creators. While companies like Anthropic argue that training AI models falls within the “fair use” doctrine of U.S. copyright law, the lawsuit counters that the datasets Anthropic relied on, such as The Pile, contain pirated content, and that AI systems, unlike human readers, do not learn by purchasing or borrowing lawful copies of works.

Anthropic did not immediately comment on the lawsuit.
