OpenAI has launched Codex Security, an AI-powered application security agent that scans GitHub repositories for vulnerabilities. The release comes shortly after Anthropic introduced a competing tool, Claude Code Security. Codex Security aims to harden codebases by identifying vulnerabilities, validating them in isolated environments, and proposing fixes for developers to review. It builds on OpenAI's Codex ecosystem, which has grown to 1.6 million weekly users.

Codex Security relies on sandbox validation to reduce false positives: the agent ranks its findings according to the evidence it gathers while testing them. Anthropic's Claude Code Security takes a different approach, using a multi-stage verification system designed to analyze software the way a human security researcher would.

Both companies are applying AI to application security, a market estimated to generate $20 billion annually, by offering tools that can outperform traditional vulnerability scanners.