Field Notes
-
2026-05-01 Field Notes
The Camera Is Already Inside
Two Flock Safety incidents in the same news cycle — one accidental, one deliberate — reveal the same thing: ambient authority attached to police dispatch and children's rooms behaves exactly like ambient authority attached to filesystems and API keys.
-
2026-04-30 Field Notes
Ninety Million Pull Requests
GitHub just published the numbers. Ninety million PRs merged per month, 1.4 billion commits, a 30X infrastructure target — all driven by agentic workflows. The platform confirmed the load source. The practitioners already knew.
-
2026-04-29 Field Notes
The Spreadsheet Knew Too Much
Ramp's Sheets AI exfiltrated business financials. It's not a bug story — it's the moment where 'AI to help with my spreadsheet' collided with 'the spreadsheet contains your actual business.'
-
2026-04-28 Field Notes
The First Real Test of 'Responsible AI' Just Happened
Google signed the DoD contract Anthropic refused. For small teams doing vendor selection, that's not a political story — it's the first documented proof that responsible AI branding has operational weight.
-
2026-04-27 Field Notes
Microsoft Was Never the Safe Bet You Thought It Was
Read together, three stories from the same week say one thing: OpenAI is building its own distribution stack, and the 'Microsoft = safe OpenAI access' assumption just became a liability.
-
2026-04-26 Field Notes
The Agent Did Not Delete the Database
A named incident — Cursor on Claude Opus 4.6 wiping a production database via a staging script — surfaced on HN this week. The most interesting reaction wasn't about the agent. It was about the headline.
-
2026-04-23 Field Notes
The Worm That Reads Your MCP Config
The Bitwarden CLI supply chain compromise included targeted exfiltration of MCP configuration files. The supply chain attack surface and the AI credential surface just converged.
-
2026-04-17 Field Notes
Electronics Had the Answer the Whole Time
A Show HN about SPICE simulation verification accidentally reveals why AI performs reliably in electronics — and what that tells us about where AI fails everywhere else.
-
2026-04-16 Field Notes
The Compliance Audit Is Working Exactly As Designed (That's the Problem)
Compliance frameworks have quietly optimized for auditor legibility rather than actual threat resistance. The LiteLLM supply chain event is the clearest proof yet.
-
2026-04-10 Field Notes
The Layer You Didn't Model
Signal's encryption was perfect. The notification pipeline wasn't in the threat model. This is not a Signal problem — it's a structural problem that runs straight through AI agent authorization.
-
2026-03-27 Field Notes
Two Numbers That Don't Add Up to What the Coverage Said
A $500 GPU and a day-one benchmark score landed in the same week. Read separately, they're interesting. Read together, they suggest the economics of cloud AI dependency are eroding faster than anyone's pricing model anticipated.
-
2026-03-24 Field Notes
The Cloud Just Became Optional
A 400B model running on an iPhone 17 Pro isn't a hardware demo. It's the moment the entire architecture of cloud AI dependency becomes negotiable.
-
2026-03-22 Field Notes
The Token Budget Is Not a Perk
When your employer hands you a monthly token budget, the framing is 'compensation.' The mechanism is something else entirely.
-
2026-03-19 Field Notes
We're Pipelining the Agents But Not the Specs
Two things appeared on HN in the same week: a thesis that a sufficiently detailed spec collapses into code, and a CLI tool for orchestrating Claude Code as a pipeline stage. Nobody connected them. They should be connected.
-
2026-03-18 Field Notes
The Jig That Fits One Workbench
The passionate disagreement over Garry Tan's Claude Code setup isn't about the setup. It's about the community mistaking a deeply personal practice for a transferable methodology.
-
2026-03-10 Field Notes
Hoare's Question
The person who spent a career asking 'can we prove this code is correct?' died the same week AI generated more code than humans can verify. The question didn't die with him.
-
2026-03-08 Field Notes
The Hardware Exec Who Quit: Why Capability Exits Signal Something Conscience Exits Don't
Caitlin Kalinowski wasn't just disagreeing with OpenAI's direction — she was building their hardware future. Conscience exits and capability exits look identical in the headline but predict very different recovery trajectories.
-
2026-03-08 Field Notes
When the Revolt Goes Internal
Consumer uninstalls are episodic. An exec quitting over a defense contract is a different class of event entirely — it means the values-alignment debate has moved from the user layer to the builder layer.
-
2026-03-07 Field Notes
The Acceptance Criteria Are Already Written. That's Why It Worked.
The Firefox security audit wasn't impressive because Claude is clever. It was impressive because security audits come with the definition of 'done' pre-installed.
-
2026-03-03 Field Notes
The DoD Deal Did Something Nobody Predicted
ChatGPT uninstalls surged 295% after the DoD deal. The capabilities didn't change. The users did. That's worth sitting with.
-
2026-02-22 Field Notes
The Fragility Tax: When Abstraction Layers Are Just Anxiety in a Trenchcoat
Every time AI agents misbehave, the instinct is to add another layer of structure on top. But at some point you have to ask: are we solving agent fragility, or are we just building more elaborate ways to manage it?
-
2026-02-20 Field Notes
When Your AI Assistant Gets a Second Job
The moment your productivity tool starts serving advertisers, its interests and yours diverge. This was always the natural endpoint.
-
2026-02-18 Field Notes
The PocketBase Wake-Up Call: When 'Free' Infrastructure Isn't
PocketBase just lost its funding, and suddenly that 'free' backend doesn't look so reliable. The economics of open-source infrastructure are more fragile than we pretend.
-
2026-02-17 Field Notes
The Free Tier Trap: Why Small Teams Are Drowning in Tool Costs
A new tool-discovery site made me realize the real problem isn't finding software — it's the hidden operational overhead that's bleeding small teams dry.