Anthropic has removed a key safety pledge from its responsible scaling policy, opting not to pause AI training even if risk mitigation measures are incomplete. This change, highlighted by Chief Science Officer Jared Kaplan, reflects the impracticality of unilateral pauses in a competitive AI landscape. Similarly, OpenAI has altered its mission statement, omitting the term "safely" and emphasizing instead that AI should benefit humanity, in line with investor and policymaker expectations.
These shifts come as Anthropic secures a $30 billion funding round, valuing the company at $380 billion, while OpenAI pursues up to $100 billion in financing with backing from Amazon, Microsoft, and Nvidia. Additionally, Anthropic's refusal to give the Pentagon full access to its AI model Claude has led to tensions with U.S. Defense Secretary Pete Hegseth, raising questions about its defense contracts.
Anthropic and OpenAI Revise Safety Language Amidst AI Race
Disclaimer: The content provided on Phemex News is for informational purposes only. We do not guarantee the quality, accuracy, or completeness of the information sourced from third-party articles. The content on this page does not constitute financial or investment advice. We strongly encourage you to conduct your own research and consult with a qualified financial advisor before making any investment decisions.
