AI models are being evaluated for their ability to detect prompt injection attacks, with 17 different injection methods tested. The exercise aims to identify potential vulnerabilities in models that fail to recognize these injections, highlighting areas for improvement in AI security measures.