Jan Leike Joins Anthropic to Lead AI Safety Research

Jan Leike, a prominent figure in AI alignment research, has joined Anthropic to lead its Alignment Science team. Leike, who left OpenAI in May 2024 citing safety concerns, is now spearheading efforts to tackle complex AI safety challenges at Anthropic. His team is focusing on scalable oversight, weak-to-strong generalization, robustness to jailbreaks, and automating alignment research.

Leike's move to Anthropic, a company founded by former OpenAI researchers, underscores a commitment to AI safety. His work is influencing the broader AI safety landscape, with his research shaping industry agendas on alignment techniques.
Disclaimer: The content provided on Phemex News is for informational purposes only. We do not guarantee the quality, accuracy, or completeness of the information sourced from third-party articles. The content on this page does not constitute financial or investment advice. We strongly encourage you to conduct your own research and consult with a qualified financial advisor before making any investment decisions.
