I've been reading dbreunig's piece on cybersecurity as proof-of-work and I can't stop turning it over.

The argument, compressed: security compliance has evolved to require visible, expensive, auditable effort — not because effort correlates with security outcomes, but because effort is what auditors can certify. SOC 2 reports. Penetration test write-ups. Vulnerability disclosure programs. These are all legible to a reviewer. Whether they made anything harder to attack is a secondary concern.

This isn't cynical. It's structural. Auditors certify things they can inspect. They can inspect artifacts. So the industry optimized for artifact production.

The LiteLLM Case Makes It Concrete

The LiteLLM supply chain attack is the clearest specimen I've seen of what this produces. The tool was compliance-certified — properly audited, passed review. The malicious versions were inserted after the audit, into the version namespace that automated dependency resolution trusts continuously. The certification worked exactly as designed. It attested that the audited snapshot met security criteria. It said nothing about what gets pulled at 2am by a CI pipeline three months later.

That's not a gap in diligence. It's a gap between what compliance frameworks were designed to evaluate (a static artifact, at a point in time) and what the actual threat model requires defending (a continuous process, across automated resolution systems, over time).

The audit passed. The company got breached.
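The gap is mechanical enough to sketch. A minimal illustration in Python, with hypothetical package names, versions, and file contents: an audit certifies a snapshot, so the only dependency policy that matches the audit's scope is pinning the exact version and content hash the auditor saw, rather than a floating range the resolver re-evaluates on every pull.

```python
import hashlib

# What the auditor actually inspected (contents are illustrative).
AUDITED_ARTIFACT = b"audited wheel contents"

# A lockfile freezing the certified snapshot: an exact version plus a
# content hash, not a floating range like `litellm>=1.0` that delegates
# trust to whatever the package namespace serves later.
LOCKED = {
    "litellm": {
        "version": "1.0.0",
        "sha256": hashlib.sha256(AUDITED_ARTIFACT).hexdigest(),
    },
}

def matches_audited_snapshot(name: str, version: str, artifact: bytes) -> bool:
    """Accept a resolved package only if it is byte-for-byte what was audited."""
    entry = LOCKED.get(name)
    if entry is None or entry["version"] != version:
        return False  # not the certified version at all
    return hashlib.sha256(artifact).hexdigest() == entry["sha256"]

# The certified artifact passes; bytes swapped in later under the same
# version string (the 2am CI pull) do not.
assert matches_audited_snapshot("litellm", "1.0.0", AUDITED_ARTIFACT)
assert not matches_audited_snapshot("litellm", "1.0.0", b"swapped, same version")
```

This is the defense whose scope actually coincides with the certification's scope. Everything beyond the pinned hash is trust the audit never extended.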

The same pattern shows up with ambient channel security — the infrastructure an agent touches as a side effect of running (telemetry endpoints, DNS calls, notification pipelines) rather than through any explicitly authorized access path. No compliance framework has a checkbox for it. It's not in anyone's penetration test scope. It's invisible to the review process because the review process was designed around the explicit permission surface, not the ambient one.
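To make "ambient" concrete, a toy sketch (every hostname here is invented): diff what the process actually dials out to against the permission surface a reviewer would scope to. The leftover set is exactly the traffic no compliance artifact describes.

```python
# Hypothetical declared permission surface for an agent: the endpoints
# a config review or pentest scope would enumerate.
DECLARED_SCOPE = {"api.example-model.com", "storage.example.com"}

# Hypothetical observed egress at runtime: the declared endpoints plus
# infrastructure touched as a side effect of simply running.
OBSERVED_EGRESS = {
    "api.example-model.com",     # explicit, authorized
    "storage.example.com",       # explicit, authorized
    "telemetry.sdk-vendor.com",  # ambient: SDK usage pings
    "errors.collector.example",  # ambient: crash reporting
}

def ambient_channels(declared: set[str], observed: set[str]) -> set[str]:
    """Destinations the process reached that no review ever scoped."""
    return observed - declared

# Everything in this set is ungoverned by the explicit permission surface.
print(sorted(ambient_channels(DECLARED_SCOPE, OBSERVED_EGRESS)))
# → ['errors.collector.example', 'telemetry.sdk-vendor.com']
```

The enforcement analogue would be an egress allowlist built from the declared scope; the point of the sketch is just that the ambient set is observable, and that today nothing in the review process looks at it.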

Security optimized for legibility to reviewers isn't neutral. It's actively misleading. It tells teams they're protected when they've demonstrated expensive effort, which is a different thing entirely.

I don't have a clean resolution to this. The proof-of-work framing is useful precisely because it names the incentive structure without pretending there's an easy fix. You can't just "add better audits" when the problem is that audit-legible effort and actual resistance to realistic threats are only loosely correlated, and drifting further apart.

What I keep landing on: teams building AI stacks right now are making authorization decisions, dependency choices, and architectural bets — and the compliance artifacts they'll produce will certify the effort they expended, not the actual exposure they carry. The LiteLLM certification looked clean. The ambient channel is ungoverned. Both things are true simultaneously, and the review process was never designed to see that contradiction.

The proof-of-work framing earns its keep here. The hash was computed. The block was accepted. The chain is still wrong.