Anthropic has accused three Chinese AI labs—DeepSeek, Moonshot, and Minimax—of conducting large-scale distillation attacks on its language model, Claude. The company claims these labs orchestrated campaigns involving over 16 million exchanges through approximately 24,000 fraudulent accounts to replicate Claude's capabilities. Distillation, a common AI training method in which a smaller model learns by imitating a larger model's outputs, becomes problematic when used to replicate advanced capabilities without incurring the underlying development costs.
Anthropic warns that such activities pose geopolitical risks, potentially equipping authoritarian regimes with advanced cyber capabilities. The company plans to enhance its detection systems, share threat intelligence, and tighten access controls, and it is urging industry cooperation to counter these threats. The incident underscores the need for robust security measures around AI models, especially as they become integral to automated systems across sectors, including crypto markets.
