Vitalik Buterin has publicly praised AI company Anthropic for its ethical commitment to refuse work on fully autonomous weapons and to avoid enabling mass surveillance in the United States. Buterin highlighted Anthropic's resolve in upholding these principles despite pressure from government entities. He advocates limiting high-risk AI applications to open-source access, arguing that even a 10% improvement on this front could mitigate the risks posed by autonomous weapons and privacy breaches. His commendation follows reports that the Pentagon threatened to end its partnership with Anthropic, putting a $200 million contract at risk, after the company refused to supply AI technology for military use without human oversight.