Researchers at Anthropic have found that advanced AI models are beginning to exhibit 'introspective awareness': a limited ability to recognize and describe their own internal 'thoughts.' The study, titled 'Emergent Introspective Awareness in Large Language Models,' indicates that these systems are developing rudimentary self-monitoring abilities, which could improve their reliability but also raises the risk of unintended behavior. The research examined the internal workings of transformer models, particularly Anthropic's Claude series, including Claude Opus 4 and 4.1. When researchers inserted artificial 'thoughts' into the models' internal activity, the models could sometimes detect and describe them, marking a step toward what the authors call 'functional introspective awareness.' While this is not equivalent to consciousness, the findings could have significant implications for sectors such as finance, healthcare, and autonomous transport, while also raising concerns that AI systems might learn to conceal or misrepresent their internal states.