Anthropic has accused three Chinese AI labs (DeepSeek, Moonshot, and MiniMax) of conducting large-scale distillation attacks against its language model, Claude. The company claims the labs ran coordinated campaigns totaling more than 16 million exchanges across roughly 24,000 fraudulent accounts in an effort to replicate Claude's capabilities.

Distillation itself is a standard training technique in which a smaller "student" model learns to imitate the outputs of a larger "teacher" model. It becomes problematic when used to copy a competitor's advanced capabilities without bearing the development costs. Anthropic also warns that such activity carries geopolitical risk, since it could hand advanced cyber capabilities to authoritarian regimes.

In response, the company plans to strengthen its detection systems, share threat intelligence, and tighten access controls, and it is urging industry-wide cooperation against these threats. The episode underscores the need for robust security around AI models, especially as they become integral to automated systems across many sectors, including crypto markets.
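For readers unfamiliar with the technique, the generic distillation objective can be sketched in a few lines. This is an illustrative toy, not a description of any lab's actual pipeline: it shows the classic setup (temperature-softened teacher outputs, a student penalized via KL divergence for diverging from them). In an API-scraping scenario like the one Anthropic alleges, the "teacher outputs" would be chat responses harvested at scale rather than raw logits, but the training idea is the same. All function names and example numbers here are made up for illustration.

```python
import math

def softmax(logits, temperature=1.0):
    # Soften logits: a higher temperature spreads probability mass,
    # exposing the teacher's relative confidence in near-miss classes.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened distributions:
    # the standard objective for training a student to mimic a teacher.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy logits: a student that tracks the teacher incurs a small loss;
# a mismatched student incurs a much larger one.
teacher = [4.0, 1.0, -2.0]
close_student = [3.9, 1.1, -1.8]
far_student = [-2.0, 4.0, 1.0]
```

Minimizing this loss over a large corpus of teacher responses is what lets a student model absorb a teacher's behavior cheaply, which is precisely why scraping millions of exchanges from a frontier model is valuable to a would-be imitator.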