Anthropic: Chinese AI firms created 24,000 fraudulent accounts for distillation attacks

Anthropic is accusing three Chinese artificial intelligence companies of "industrial-scale campaigns" to "illicitly extract" its technology using distillation attacks. Anthropic says these companies created 24,000 fraudulent accounts to hide these efforts.
In a blog post detailing the attacks, Anthropic named three AI firms, including DeepSeek, the maker of the popular DeepSeek AI models. Anthropic explicitly framed the attack as an issue of national security.
"We have identified industrial-scale campaigns by three AI laboratories—DeepSeek, Moonshot, and MiniMax—to illicitly extract Claude’s capabilities to improve their own models," reads the blog post. "These labs generated over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts, in violation of our terms of service and regional access restrictions."
In January, OpenAI also accused DeepSeek of engaging in distillation attacks, effectively stealing its technology.
At the time, many people reacted not with sympathy, but with mockery, as OpenAI and other AI companies have claimed they have the absolute right to train their models on copyrighted works without permission or payment. Typically, AI industry supporters say they have no choice but to train on copyrighted works because Chinese competitors are sure to ignore copyright laws anyway.
"You can't be expected to have a successful AI program when every single article, book, or anything else that you've read or studied, you're supposed to pay for," President Donald Trump said at an AI event in July 2025. "When a person reads a book or an article, you've gained great knowledge. That does not mean that you're violating copyright laws or have to make deals with every content provider." He also added, "China’s not doing it."
That puts AI companies in the awkward position of claiming their intellectual property is off-limits for model training, while also engaging in similar behavior themselves.
What are distillation attacks?
Distillation is a common training technique for large language models; however, it can also be used to effectively reverse-engineer some aspects of the technology. In distillation, AI researchers repeatedly query a model with variations of the same prompts, then use its responses as training data to teach another model to mimic its behavior.
Anthropic's blog post explains the distinction: "Distillation is a widely used and legitimate training method. For example, frontier AI labs routinely distill their own models to create smaller, cheaper versions for their customers. But distillation can also be used for illicit purposes: competitors can use it to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently."
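In practice, distillation means querying a stronger "teacher" model and collecting prompt-response pairs to train a "student" model on. The toy Python sketch below (all names hypothetical; the teacher is a stand-in function, not a real model API) illustrates only the data-collection loop that both legitimate distillation and the alleged illicit variant share:

```python
def query_teacher(prompt: str) -> str:
    # Hypothetical stand-in for a call to a frontier model's API.
    # In a real pipeline this would be a network request to the teacher model.
    return f"Answer to: {prompt}"

def build_distillation_dataset(prompts, variations_per_prompt=3):
    """Collect (prompt, teacher_response) pairs for student training.

    Running many variations of each prompt samples the teacher's behavior
    broadly -- the repeated querying described above.
    """
    dataset = []
    for base in prompts:
        for i in range(variations_per_prompt):
            variant = f"{base} (phrasing {i})"  # crude stand-in for paraphrasing
            dataset.append((variant, query_teacher(variant)))
    return dataset

pairs = build_distillation_dataset(
    ["Explain photosynthesis", "Write a sorting function in Python"]
)
print(len(pairs))  # 2 prompts x 3 variations = 6 training pairs
# A student model would then be fine-tuned on `pairs` to mimic the teacher.
```

At the scale Anthropic describes, this loop would run millions of times across thousands of accounts, which is why the company could detect it as a pattern of abuse rather than ordinary usage.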
Chinese companies have a reputation for flagrantly ignoring intellectual property treaties and copyright laws, and reverse-engineering technology from Western companies. However, while Anthropic says the distillation attacks it uncovered violated its terms of service, it's not clear that they violated any international laws, or what remedy Anthropic has besides suspending the violating accounts.
To prevent attacks like this, Anthropic called for cooperation between AI companies, government agencies, and other stakeholders.
AI companies like Anthropic, xAI, Meta, and OpenAI are in the midst of one of the largest spending booms ever seen, with tens of billions of dollars being poured into AI infrastructure, data centers, and research and development. If rival foreign AI companies can cheaply recreate their LLM technology using distillation, they would clearly have an advantage over their U.S. rivals.
"These campaigns are growing in intensity and sophistication," the blog post reads. "The window to act is narrow, and the threat extends beyond any single company or region. Addressing it will require rapid, coordinated action among industry players, policymakers, and the global AI community."
Mashable reached out to Anthropic with questions about the distillation attacks, and we'll update this article if we receive a response.
Disclosure: Ziff Davis, Mashable’s parent company, in April 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.