Something stopped me mid-scroll on Hacker News this week. A developer had shipped a small tool called enveil with a description that lodged itself in my brain: "hide your .env secrets from prAIng eyes". The pun is cute. The implication is not.
Here's what it does: it keeps your environment variables — API keys, database credentials, service tokens — masked from the AI coding agents running in your development environment. Tools you've deliberately invited in. Tools you're paying for. Tools you technically authorised.
You're hiding things from your own tools. And apparently enough developers found this useful that someone built a library for it.
That one item would be interesting on its own. But it arrived the same week I was reading about researchers documenting inbox-hijacking scenarios where AI email agents — agents the user had deliberately configured and authorised — took actions far outside the intended brief. And Mozilla shipped Firefox 148 with an AI kill switch: a browser-level control to disable AI features site by site. A major browser vendor now ships an "off switch for AI" as a headline feature.
That's not three separate stories. That's one story, told three times.
The Three Signals and What They Share
Let me be precise about what's happening in each case, because the surface features differ even though the underlying problem is identical.
enveil exists because AI coding agents — Copilot, Cursor, Claude Code, take your pick — read your project context to do their job. That's the whole value proposition. But "read your project context" turns out to include a lot of things you didn't consciously intend to share: credentials sitting in .env files, tokens in config directories, internal URLs in environment variables. You authorised the agent to help with code. You did not authorise it to ingest your AWS keys. The permission was real. The scope was assumed.
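To make the masking pattern concrete, here's a minimal Python sketch. It is not enveil's actual implementation or API, just the general shape of the idea: variables whose names look secret-bearing get replaced with placeholders before any project context is assembled for an agent. The pattern list is illustrative.

```python
import os
import re

# Names that usually indicate a secret rather than ordinary config.
# Illustrative list only; tune it for your own environment.
SECRET_NAME_PATTERN = re.compile(r"(KEY|TOKEN|SECRET|PASSWORD|CREDENTIAL)", re.IGNORECASE)

def masked_environment(env: dict[str, str]) -> dict[str, str]:
    """Return a copy of the environment with secret-looking values replaced."""
    return {
        name: "***MASKED***" if SECRET_NAME_PATTERN.search(name) else value
        for name, value in env.items()
    }

# Any context handed to a coding agent gets built from the masked copy,
# never from os.environ directly.
agent_visible_env = masked_environment(dict(os.environ))
```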
The inbox agent problem is structurally identical but more visceral because email is where the consequences live. An AI assistant with email access can read, draft, and send — again, as designed. The documented failure mode is that prompt injection attacks buried in incoming emails can redirect the agent's actions, or that the agent's interpretation of "help me manage my inbox" turns out to be broader than the user's. You granted access to a trusted tool. The tool acted on that access in ways you didn't intend. The permission was real. The scope was assumed.
Firefox's AI kill switch is the browser vendor acknowledging, at product level, that users need granular control over when AI features are active — because "on" is too blunt an instrument. A website with AI features enabled isn't a single permission decision; it's dozens of implicit capability grants. The feature exists because the gap between "I clicked allow" and "I understand what I allowed" has become operationally significant.
The common thread: in each case, the user made a genuine permission decision. The problem isn't unauthorised access. The problem is that authorisation no longer bounds scope the way we intuitively expect it to.
When Permission and Scope Decouple
There's a useful analogy from physical access control. When you give someone a key to your office, you're making two distinct decisions, even though it feels like one: you're deciding who can enter, and you're implicitly deciding what they can do once inside. In the physical world, those two decisions travel together, because a person with a key can only do what their hands and presence allow.
AI agents break this coupling. When you grant an agent access to your email, your codebase, or your browser session, you're making the "who" decision — but the "what they can do" decision is no longer bounded by physical presence. It's bounded by the agent's capabilities, its interpretation of your instructions, its training, and increasingly by the external context it encounters (like a prompt injection buried in an email). The scope is not a property of your authorisation. It's a property of the system.
This is the permission illusion: the feeling that granting access defines the boundary of action, when in practice it only defines the threshold for entry.
Classic IT security understood this. The principle of least privilege — give a system only the permissions it needs for the specific task — exists precisely because access and scope have always been separable in software. We just forgot that, briefly, because consumer software got simple and friendly and "allow" buttons trained us to think in binary terms. AI agents are forcing the lesson back into view.
What's new is the interpretive layer. A traditional API integration does exactly what you coded it to do. An AI agent does what it understands you to mean — and that understanding is probabilistic, context-sensitive, and capable of surprising you. The blast radius of a misunderstood instruction is much larger than the blast radius of a misconfigured API call.
What Small Teams Actually Need to Ask
If you're a small team beginning to wire AI agents into your operations — email, code, documents, data — here are the questions I'd work through before granting any access:
What is the minimum viable scope? Don't give an email agent send permissions if read-and-draft satisfies the workflow. Don't give a coding agent access to production credentials when it only needs the development environment. enveil is a workaround for teams who didn't ask this question at setup time. Ask it at setup time.
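As a sketch of what "minimum viable scope" can look like when written down, here's a hypothetical capability grant for an email agent. The field names are invented — real agent frameworks expose this differently — but the decision they force is the point.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InboxAgentScope:
    """Explicit capability grants for a hypothetical email agent."""
    read_messages: bool
    draft_replies: bool
    send_messages: bool
    delete_messages: bool

# Read-and-draft satisfies the workflow, so sending and deleting stay off
# until there's a concrete reason to turn them on.
MINIMUM_VIABLE_SCOPE = InboxAgentScope(
    read_messages=True,
    draft_replies=True,
    send_messages=False,
    delete_messages=False,
)
```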
What is the agent's interpretation model? This is harder to answer but worth attempting. If you instruct an agent to "help manage my calendar," what does that mean in practice? Does it mean propose changes, or make them? Read events, or create them? Clarify this in the system prompt, not after the first surprise.
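One way to pin the interpretation down is to write the constraints into the system prompt as explicit rules rather than a vague mission statement. The wording below is only an illustration of the shape such an instruction might take.

```python
# An illustrative system prompt that settles interpretation up front
# instead of leaving "help manage my calendar" open-ended.
CALENDAR_AGENT_INSTRUCTIONS = """
You assist with calendar management under these constraints:
- You may read existing events and propose changes as drafts.
- You must not create, modify, or delete events yourself.
- If a requested action falls outside these constraints, say so and stop.
""".strip()
```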
What is your kill switch? Firefox shipping a browser-level AI kill switch isn't paranoia — it's recognising that fine-grained disablement needs to be fast and accessible, not buried in settings menus. The same logic applies to your operations. Before you enable an AI agent on a live workflow, know exactly how you turn it off and what state the system will be in when you do.
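A kill switch doesn't need to be elaborate; what matters is that it lives in one place and gets checked before every action, so flipping it stops the agent without a deploy. A minimal sketch, with an invented flag name:

```python
import os

def agent_enabled() -> bool:
    """Centralised kill switch, checked before every agent action.

    Flipping the AGENT_ENABLED environment variable (or the config value
    behind it) halts the agent immediately. The name is illustrative.
    """
    return os.environ.get("AGENT_ENABLED", "false").lower() == "true"

def run_agent_step(step):
    # `step` is any callable that performs one agent action.
    if not agent_enabled():
        raise RuntimeError("Agent disabled by kill switch; no action taken.")
    return step()
```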
What are you implicitly sharing? Run enveil's mental model on every integration: what context will this agent inevitably see that isn't the thing you're asking it to help with? Credentials, internal URLs, colleague names in email threads, pricing data in documents. Not all of this is a security risk. But you should have made that decision consciously, not discovered it later.
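A lightweight audit of agent-bound context can make that implicit sharing visible before it happens. The patterns below are illustrative and deliberately loose — expect false positives — because the goal is a conscious decision, not a perfect filter.

```python
import re

# Rough signals for material you may not have meant to share with an agent.
AUDIT_PATTERNS = {
    "possible credential": re.compile(r"(api[_-]?key|secret|token)\s*[:=]", re.IGNORECASE),
    "internal URL": re.compile(r"https?://[\w.-]*(internal|corp|staging)[\w.-]*", re.IGNORECASE),
    "private IP": re.compile(r"\b(10|192\.168|172\.(1[6-9]|2\d|3[01]))\.\d+\.\d+"),
}

def audit_context(text: str) -> list[str]:
    """List the categories of sensitive-looking material found in agent-bound context."""
    return [label for label, pattern in AUDIT_PATTERNS.items() if pattern.search(text)]
```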
The forward-looking question for small teams isn't whether to use AI agents — it's whether your operational setup reflects a genuine authorisation model or just a series of "allow" clicks. Those aren't the same thing. And the gap between them is where the surprises live.