Anthropic Study Reveals AI Agents' Potent On-Chain Attack Capabilities

Anthropic's latest research highlights the significant on-chain attack capabilities of AI agents. In simulations of real smart contract hacks from 2020 to 2025, the AI models Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5 collectively replicated exploits worth approximately $4.6 million. In addition, while scanning 2,849 contracts with no known vulnerabilities, the models discovered two new zero-day vulnerabilities and successfully simulated profitable exploits against them.

The study finds that the profitability of AI-driven on-chain attacks has doubled roughly every 1.3 months over the past year, indicating that AI systems are now capable of autonomously exploiting vulnerabilities for profit.
Disclaimer: The content provided on Phemex News is for informational purposes only. We do not guarantee the quality, accuracy, or completeness of the information sourced from third-party articles. The content on this page does not constitute financial or investment advice. We strongly encourage you to conduct your own research and consult with a qualified financial advisor before making any investment decisions.
