OpenAI and Anthropic Clash with Pentagon Over AI Ethics

OpenAI and Anthropic have taken a stand against the Pentagon's demands for unrestricted use of AI technologies. Anthropic, the developer of the AI model Claude, refused the Pentagon's ultimatum to remove contractual restrictions on mass surveillance and autonomous weapons, putting a $200 million contract at risk. In response, the Pentagon labeled Anthropic a "supply chain security risk," effectively barring its technology from military use.

In a surprising turn, OpenAI, which had initially aligned with Anthropic's ethical stance, signed a contract with the Pentagon under similar conditions but without Anthropic's additional safeguards. The move, which handed OpenAI the position Anthropic vacated, highlights the complex dynamics between AI companies and government contracts, and raises questions about how firms weigh ethical principles against business opportunities.
Disclaimer: The content provided on Phemex News is for informational purposes only. We do not guarantee the quality, accuracy, or completeness of the information sourced from third-party articles. The content on this page does not constitute financial or investment advice. We strongly encourage you to conduct your own research and consult with a qualified financial advisor before making any investment decisions.
