A joint security report by SlowMist and Bitget warns of security risks in using AI agents in Web3 environments. The report highlights that during automated development workflows, AI agents may access configuration files for debugging, log analysis, or dependency installation. Without clear ignore strategies or access controls, sensitive information in those files could be written to logs, sent to remote APIs, or exposed by malicious plugins. The report also emphasizes that, unlike traditional software systems, many Web3 operations, such as on-chain transfers, token swaps, liquidity additions, and smart contract calls, are irreversible: once a transaction is signed and broadcast, it is typically impossible to reverse or roll back. This amplifies the risk when AI agents perform on-chain operations and calls for heightened vigilance and stricter security controls.
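The "ignore strategy" the report calls for can be as simple as a deny-list that an agent's file-access layer consults before reading anything. The sketch below is a minimal illustration of that idea; the patterns, function names, and deny-list contents are assumptions for the example, not taken from the report.

```python
import fnmatch

# Hypothetical deny-list of filename patterns an AI agent should never read
# during debugging, log analysis, or dependency installation.
SENSITIVE_PATTERNS = [
    ".env", "*.env", "*.pem", "*.key",
    "secrets.*", "*wallet*.json", "id_rsa*",
]

def is_sensitive(path: str) -> bool:
    """Return True if the file's basename matches any deny-list pattern."""
    name = path.replace("\\", "/").rsplit("/", 1)[-1]
    return any(fnmatch.fnmatch(name, pat) for pat in SENSITIVE_PATTERNS)

def safe_read(path: str) -> str:
    """Read a file only if it does not match the sensitive-file deny-list."""
    if is_sensitive(path):
        raise PermissionError(f"blocked: {path} matches sensitive-file deny-list")
    with open(path, encoding="utf-8") as f:
        return f.read()
```

In practice such a check would sit inside the agent's tool layer, so that every file read, log line, and outbound API payload passes through it before leaving the local environment.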