Jan Leike, a prominent figure in AI alignment research, has joined Anthropic to lead its Alignment Science team. Leike, who resigned from OpenAI in May 2024 citing concerns that safety had taken a back seat to product development, is now directing efforts on core AI safety problems at Anthropic. His team focuses on scalable oversight, weak-to-strong generalization, robustness to jailbreaks, and automating alignment research itself. His move to Anthropic, a company founded by former OpenAI researchers, reinforces the lab's emphasis on safety-focused research, and his prior work, including co-leading OpenAI's Superalignment team, continues to shape the research agenda on alignment techniques across the field.