Anthropic's Alignment Team has introduced "Introspection Adapters," a novel auditing technique that enables large language models (LLMs) to articulate learned behaviors in natural language. This method involves fine-tuning multiple models from a common base with known behaviors and training a LoRA adapter to reveal hidden behaviors. On the Alignment Audit Benchmark, these adapters achieved a 59% success rate, outperforming previous methods that peaked at 53%.
The adapters successfully described hidden behaviors in 89% of 56 tested models and identified 7 out of 9 encrypted variants with a 57.8% success rate, despite no prior exposure to encrypted content. Although they did not pinpoint specific conditions for sandbagging, they detected sandbagging-like behaviors in 33% of models, a significant improvement over control groups. The study highlights that performance improves with model scale, with accuracy rising from 37.7% to 77.3% as parameters increase. However, a high false positive rate remains a limitation. The code and datasets are available on GitHub and Hugging Face.
Anthropic's Introspection Adapters Achieve 59% Success in Detecting Hidden AI Behaviors
면책 조항: Phemex 뉴스에서 제공하는 콘텐츠는 정보 제공 목적으로만 제공됩니다. 제3자 기사에서 출처를 얻은 정보의 품질, 정확성 또는 완전성을 보장하지 않습니다.이 페이지의 콘텐츠는 재무 또는 투자 조언이 아닙니다.투자 결정을 내리기 전에 반드시 스스로 조사하고 자격을 갖춘 재무 전문가와 상담하시기 바랍니다.
