The Model That Rewrote Its Own Scaffold

MiniMax open-sourced M2.7, a 229B sparse MoE model for coding and agentic work. The interesting part isn't the benchmarks — it's the self-evolution loop: an internal M2.7 instance ran 100+ autonomous rounds of modifying its own programming scaffold, keeping what worked and reverting what didn't, and came out 30% better with no per-step human direction. That's a different kind of claim from standard RL post-training results.
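The loop described above amounts to greedy hill climbing over scaffold versions. Here's a minimal sketch of that keep-or-revert dynamic, where `propose_edit` and `evaluate` are hypothetical stand-ins for the model's self-modification step and its benchmark harness (nothing below is from MiniMax's actual implementation):

```python
import random

def self_evolve(scaffold, evaluate, propose_edit, rounds=100):
    """Greedy keep-or-revert loop: each round proposes an edit to the
    scaffold and keeps it only if it scores strictly better; otherwise
    the edit is discarded and the previous scaffold survives."""
    best_score = evaluate(scaffold)
    for _ in range(rounds):
        candidate = propose_edit(scaffold)  # model-generated modification
        score = evaluate(candidate)         # e.g. pass rate on a task suite
        if score > best_score:              # keep what worked...
            scaffold, best_score = candidate, score
        # ...otherwise revert: drop the candidate, keep the old scaffold
    return scaffold, best_score

# Toy demo with a made-up objective: evolve an integer toward a target.
random.seed(0)
target = 10
score_fn = lambda s: -abs(s - target)           # higher is better
mutate = lambda s: s + random.choice([-1, 1])   # random one-step edit
best, best_score = self_evolve(0, score_fn, mutate, rounds=100)
```

The revert branch is what distinguishes this from open-ended self-modification: a bad edit can never degrade the accepted scaffold, so the score is monotonically non-decreasing across rounds.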

Read more →

Renting the Rails You Run On

Anthropic ended Claude subscription coverage for third-party agent frameworks like OpenClaw on April 4, citing agentic compute costs that break the flat-rate subscription math. The backstory — legal threats, the creator joining OpenAI, and a brief account suspension — makes the economics harder to read than they first appear.

Read more →

Eight Hours in the Shell

Z.AI released GLM-5.1, a 754B MoE open-weight model under the MIT license, designed for autonomous coding sessions lasting up to 8 hours. The "8-hour window" is explicitly a training objective — sustained goal-directed behavior through thousands of tool calls — not just a context-length claim. It claims the top spot on SWE-Bench Pro with a score of 58.4, ahead of GPT-5.4 and Claude Opus 4.6.

Read more →

VOID: Remove the Object, Rewrite the Physics

Netflix and INSAIT Sofia University released VOID, the first open-source video inpainting system that removes objects and regenerates the physical interactions they caused — not just the hole they left. It's Netflix's first public AI model release, built on a novel quadmask encoding and CogVideoX, under Apache 2.0.

Read more →

The Four Freedoms, Reconsidered

A blog post by George London argues that AI coding agents will revive Stallman's four software freedoms by letting non-technical users modify software through agent intermediaries. The argument is worth taking seriously — and so is the hole in it.

Read more →

The Ad in the Forest

GitHub Copilot inserted a promotional blurb for itself and Raycast into a developer's pull request description. The same week, a Rye-language blog post argued that the open web is turning into a cognitive dark forest where AI platforms absorb every public innovation and the rational response is silence. One incident, one essay, same underlying dynamic.

Read more →

Something Happened a Month Ago

Greg Kroah-Hartman at KubeCon EU described an overnight quality shift in AI-generated Linux kernel patches — from obvious garbage to ~two-thirds correct — that nobody can explain. Simultaneously, Sashiko, an agentic patch reviewer from Google's kernel team now hosted at the Linux Foundation, is catching 53% of bugs that passed prior human review. AI is entering the kernel review pipeline from both directions at once.

Read more →

The Speech Stack Goes Open

New open-weight ASR and TTS releases narrow the quality gap with proprietary speech stacks, while research on self-improving agents pushes agent design forward.

Read more →

AI in the Plumbing

Kernel patch review automation and compact local training hardware show AI moving deeper into infrastructure and developer workflows.

Read more →