Agents Need Systems Thinking, Not Just Aligned Models

Two independent developments this week point to the same underlying problem: aligning individual models doesn't compose into good system-level behavior. Addy Osmani's Agent Skills project encodes senior engineering workflows as markdown files so that agents follow a defined process, while a new position paper argues that multi-agent safety failures are structural, and that more capable models make them worse.
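To make the first item concrete: skills in this style are usually plain markdown files with a short frontmatter block that tells the agent when to apply them. The file below is a hypothetical sketch; the skill name, description, and checklist steps are illustrative and not taken from Osmani's repository.

```markdown
---
name: code-review
description: Apply a senior-engineer review checklist before approving a change
---

# Code Review Skill

When asked to review a change:

1. Read the diff end to end before leaving any comment.
2. Check that every changed code path has test coverage.
3. Flag public API changes that lack documentation updates.
4. Summarize the risk level (low/medium/high) with a one-line justification.
```

The point of encoding the workflow this way is that the process lives in a reviewable artifact rather than in whatever the model happens to do on a given run.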

Read more →