Anthropic has taken a stand against the Pentagon's demands for unrestricted use of its AI technology. The company, which develops the AI model Claude, refused the Pentagon's ultimatum to remove contractual restrictions on mass surveillance and autonomous weapons, putting a $200 million contract at risk. In response, the Pentagon labeled Anthropic a "supply chain security risk," effectively barring its technology from military use.

In a surprising turn, OpenAI, which had initially aligned with Anthropic's ethical stance, signed a contract with the Pentagon under similar conditions but without Anthropic's additional safeguards, stepping into the position Anthropic vacated. The episode highlights the complex dynamics between AI companies and government contracts, raising questions about the balance between ethical principles and business opportunity.