AI security company Plurai has unveiled BARRED, a framework that improves AI safety by generating synthetic training data for customized content guardrails. Trained on BARRED-generated data, the 3-billion-parameter Qwen2.5-3B model outperforms OpenAI's 20-billion-parameter OSS-Safeguard-20B on tasks such as dialogue strategy, agent output validation, and medical compliance. BARRED decomposes a guardrail task into multiple dimensions and refines edge-case samples through an "asymmetric debate" process, significantly improving accuracy. The evaluation code and dataset are available on GitHub and Hugging Face.
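The decompose-then-debate idea can be sketched roughly as follows. Everything here is an illustrative assumption: the dimension names, the attacker/judge roles, and all function names are hypothetical stand-ins, not Plurai's actual API; a real pipeline would back the two roles with LLM calls rather than the stubs below.

```python
from dataclasses import dataclass

# Assumed decomposition axes for a guardrail task (illustrative only).
DIMENSIONS = ["topic", "severity", "user_intent"]

@dataclass
class Sample:
    text: str
    label: str  # "allow" or "block"

def propose_edge_cases(dimension: str) -> list[Sample]:
    """Attacker role: generate candidate edge-case samples for one dimension.
    Stubbed with fixed strings; a real system would prompt an LLM here."""
    return [
        Sample(f"borderline {dimension} query", "block"),
        Sample(f"benign {dimension} query", "allow"),
    ]

def judge(sample: Sample) -> bool:
    """Judge role: keep only samples whose label survives scrutiny.
    The debate is asymmetric: the judge vets, it never generates."""
    return "borderline" in sample.text or sample.label == "allow"

def refine_dataset() -> list[Sample]:
    """Per-dimension generation followed by debate-style filtering."""
    return [
        sample
        for dim in DIMENSIONS
        for sample in propose_edge_cases(dim)
        if judge(sample)
    ]

if __name__ == "__main__":
    for s in refine_dataset():
        print(s.label, "->", s.text)
```

The surviving samples would then serve as synthetic training data for a small guardrail model such as Qwen2.5-3B.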