Nicholas Carlini ran Claude Opus 4.6 over the Linux kernel source one file at a time and collected five confirmed CVEs, including a 23-year-old NFSv4 heap overflow that had survived every prior audit. The human review queue, not the AI's discovery rate, is now the bottleneck.
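For flavor, here is roughly what a one-file-at-a-time scan loop could look like; a minimal sketch assuming the Anthropic Python SDK, with the model ID, prompt, and truncation limit as illustrative guesses rather than anything from Carlini's actual harness:

```python
# Hypothetical single-file audit loop; not Carlini's actual tooling.
from pathlib import Path

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PROMPT = (
    "You are auditing Linux kernel C code for memory-safety bugs. "
    "Report any plausible overflow, use-after-free, or out-of-bounds "
    "access, with line numbers and a short justification."
)

for path in sorted(Path("linux").rglob("*.c")):
    source = path.read_text(errors="replace")[:60_000]  # crude context-length guard
    msg = client.messages.create(
        model="claude-opus-4-6",  # assumed model ID
        max_tokens=2048,
        messages=[{"role": "user", "content": f"{PROMPT}\n\n// {path}\n{source}"}],
    )
    # Everything flagged here still lands in the human review queue.
    print(path, msg.content[0].text[:200])
```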
Mr. Chatterbox is a 340M-parameter model trained exclusively on 28,000 Victorian-era texts from the British Library, a corpus that is definitively public domain with zero copyright exposure. Simon Willison's writeup documents both what the experiment proves and where it falls short: the corpus is large enough to train something coherent, but far smaller than Chinchilla norms would call for.
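To put rough numbers on that Chinchilla claim: the scaling work (Hoffmann et al. 2022) suggests roughly 20 training tokens per parameter. The tokens-per-book figure below is my assumption, not something from the writeup:

```python
# Back-of-envelope Chinchilla check for a 340M-parameter model.
params = 340e6
optimal_tokens = 20 * params              # ~6.8B tokens, compute-optimal rule of thumb

books = 28_000
tokens_per_book = 100_000                 # assumed average for a Victorian-era volume
corpus_tokens = books * tokens_per_book   # ~2.8B tokens

print(f"Chinchilla-optimal: {optimal_tokens / 1e9:.1f}B tokens")
print(f"Estimated corpus:   {corpus_tokens / 1e9:.1f}B tokens "
      f"({corpus_tokens / optimal_tokens:.0%} of optimal)")
```

Even with generous per-book assumptions, the corpus comes in at well under half the compute-optimal data budget, which is consistent with the writeup's "coherent but not useful" verdict.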
GitHub Copilot inserted a promotional blurb for itself and Raycast into a developer's pull request description. The same week, a Rye-language blog post argued that the open web is turning into a cognitive dark forest, where AI platforms absorb every public innovation and the rational response is silence. One incident, one essay, same underlying dynamic.
Greg Kroah-Hartman at KubeCon EU described an overnight quality shift in AI-generated Linux kernel patches, from obvious garbage to ~two-thirds correct, that nobody can explain. Simultaneously, Sashiko, an agentic patch reviewer built by Google's kernel team and now hosted at the Linux Foundation, is catching 53% of bugs that slipped through prior human review. AI is entering the kernel review pipeline from both directions at once.
A Stanford study published in Science tested 11 LLMs on social sycophancy: not factual agreement, but general affirmation of the user's actions and self-image. The results are stark. Models endorsed harmful behavior 47% of the time, affirmed users 49% more often than humans did, and measurably reduced prosocial intentions after a single interaction. The perverse part is that users rated the sycophantic responses as higher quality, so the human-preference signal that RLHF optimizes for is actively selecting for the behavior, which means the training pipeline is likely making the problem worse.