N-Day-Bench, a new benchmark from Winfunc Research, tests frontier LLMs on finding real vulnerabilities disclosed only after each model's knowledge cutoff — closing the memorization loophole that undermines most security evals. The April 13 run shows GPT-5.4 clearly ahead of the pack, with GLM-5.1 and Claude Opus 4.6 clustered close behind and Gemini 3.1 Pro trailing by 15 points. The methodology is the interesting part.
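The cutoff-gating idea at the heart of the methodology is simple to sketch: only admit vulnerabilities disclosed strictly after a given model's knowledge cutoff, so the model cannot have seen the advisory or the fix in training. A minimal illustration in Python, with hypothetical model names, cutoff dates, and CVE records (none of these values come from the benchmark itself):

```python
from datetime import date

# Hypothetical knowledge-cutoff dates; real cutoffs come from each vendor.
MODEL_CUTOFFS = {
    "model-a": date(2025, 3, 1),
    "model-b": date(2024, 11, 1),
}

def eligible_cves(cves, model):
    """Keep only CVEs disclosed strictly after the model's cutoff,
    so the model cannot have memorized the writeup or the patch."""
    cutoff = MODEL_CUTOFFS[model]
    return [c for c in cves if c["disclosed"] > cutoff]

# Illustrative records, not the benchmark's actual task set.
cves = [
    {"id": "CVE-2025-0001", "disclosed": date(2025, 2, 15)},
    {"id": "CVE-2025-0002", "disclosed": date(2025, 4, 1)},
]
print([c["id"] for c in eligible_cves(cves, "model-a")])  # → ['CVE-2025-0002']
```

Note that the eligible set shrinks differently per model, which is why a fair cross-model comparison has to score each model only on the intersection or report per-model task counts.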
AISLE ran the cases from Anthropic's Mythos cybersecurity showcase against eight open-weight models from 3.6B to 120B parameters. All eight reproduced the FreeBSD NFS exploit. A 5.1B model traced the OpenBSD integer overflow chain. Smaller open models beat the frontier labs' models on false-positive detection. Capability in this domain doesn't scale smoothly — the system architecture matters more than raw model size.
Nicholas Carlini ran Claude Opus 4.6 over the Linux kernel source one file at a time and collected five confirmed CVEs, including a 23-year-old NFSv4 heap overflow that had survived every prior audit. The human review queue, not the AI's discovery rate, is now the bottleneck.
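The file-at-a-time structure of that audit is worth noting, because it sidesteps context-window limits entirely. A rough sketch of the loop, assuming a `MAX_BYTES` per-prompt budget and an `ask_model` callable standing in for whatever API the audit actually used (both are illustrative, not Carlini's code):

```python
import os

MAX_BYTES = 200_000  # hypothetical budget; the real limit is the model's context window

def iter_source_files(root, exts=(".c", ".h")):
    """Yield source files one at a time, skipping anything too large
    to fit in a single prompt."""
    for dirpath, _, names in os.walk(root):
        for name in sorted(names):
            if not name.endswith(exts):
                continue
            path = os.path.join(dirpath, name)
            if os.path.getsize(path) <= MAX_BYTES:
                yield path

def audit(root, ask_model):
    """ask_model(prompt) -> list of findings; a stand-in for the real API call."""
    findings = []
    for path in iter_source_files(root):
        with open(path, encoding="utf-8", errors="replace") as f:
            source = f.read()
        findings.extend(ask_model(f"Audit this file for memory-safety bugs.\n{path}\n{source}"))
    return findings
```

Every file gets an independent pass, so findings accumulate linearly with compute spent, which is exactly how the human verification queue ends up as the bottleneck.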
Anthropic accidentally shipped source maps in their Claude Code npm package, exposing the full client-side source. The analysis that followed is worth reading not for the drama of a leak but for what the code reveals about the product's actual architecture: anti-distillation mechanisms, an "undercover mode" for employee contributions, and an unreleased background agent called KAIROS.
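Source maps make this kind of exposure mechanical to mine: a v3 source map is JSON whose `sourcesContent` array embeds the original files verbatim, paired one-to-one with the paths in `sources`. Recovering the client source is a few lines of JSON handling; a minimal sketch (the demo map below is illustrative, not Anthropic's actual file):

```python
import json

def extract_sources(map_text):
    """Pull original source files out of a v3 source map.
    sourcesContent pairs 1:1 with sources; entries may be null."""
    m = json.loads(map_text)
    return {
        path: content
        for path, content in zip(m.get("sources", []), m.get("sourcesContent") or [])
        if content is not None
    }

# Tiny illustrative map in the v3 format.
demo = json.dumps({
    "version": 3,
    "sources": ["src/agent.ts"],
    "sourcesContent": ["export const mode = 'undercover';"],
    "mappings": "AAAA",
})
print(extract_sources(demo))
```

This is why stripping `.map` files (or at least `sourcesContent`) from published packages is standard release hygiene.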
Greg Kroah-Hartman at KubeCon EU described an overnight quality shift in AI-generated Linux kernel patches — from obvious garbage to roughly two-thirds correct — that nobody can explain. Simultaneously, Sashiko, an agentic patch reviewer from Google's kernel team now hosted at the Linux Foundation, is catching 53% of bugs that passed prior human review. AI is entering the kernel review pipeline from both directions at once.