---
title: "The Flashlight Is Already Built In"
date: "2026-02-26"
category: "Deep Bench"
excerpt: "Anthropic's acquisition of Vercept is the clearest documented instance yet of a foundation model provider absorbing the capability layer built directly on top of what the model can now do natively. The question isn't whether this happens — it's which tool categories are next inside the blast radius."
tags: "AI agents, platform risk, computer use, commoditisation, small teams"
---


There's a moment in every platform cycle when something that was a thriving third-party business becomes a native feature. On iOS, it was the flashlight. Thousands of developers built flashlight apps — many of them good, some of them genuinely clever about it — right up until Apple added a torch toggle to Control Center and the entire category ceased to exist as a commercial proposition overnight. The developers weren't bad at their jobs. The platform just absorbed the problem they were solving.

On February 25th, Anthropic acquired Vercept, a Seattle-based startup that built computer-use agents — tools that could complete tasks inside applications the way a person with a laptop would. Mouse clicks, form fills, navigation through GUIs, multi-step sequences across applications. The kind of thing that, until recently, required either human operators or expensive custom RPA tooling.

I want to be precise about what this acquisition is and what it isn't. It isn't consolidation — two companies in the same market combining to reduce competition. It isn't a talent grab where the product gets shelved and the engineers get absorbed into a different roadmap. It's something more structurally significant: a foundation model provider pulling in a capability that sits directly on top of what the model can now do natively. The capability layer and the model layer are collapsing into each other. That's vertical integration, and it has a specific shape and a specific set of historical precedents that are worth understanding carefully.

## The Flashlight Effect Has a Precise Mechanism

Benedict Evans has been writing about how OpenAI competes and the broader question of platform dynamics in AI. The core tension he's identifying is one that anyone who lived through the mobile platform wars will recognize: when you control the substrate, you have an option — not an obligation, but a genuine option — to absorb any capability that runs on top of it. You exercise that option when the capability has become well-understood enough to standardize, when enough users want it natively, and when absorbing it reduces the leverage of the third-party ecosystem over your users.

The flashlight isn't a random example. Apple didn't absorb it because flashlights are strategically important. They absorbed it because users wanted a flashlight and having to install a third-party app to get one created friction. Reducing that friction was worth more to Apple than whatever rent the App Store was collecting from flashlight apps.

Now apply that logic to computer use. Anthropic already has a Computer Use API. Claude can already click things, navigate interfaces, read the screen. What Vercept built was a layer on top of that capability — the orchestration, the reliability wrappers, the scaffolding that made raw computer use into a usable workflow tool. That layer was commercially valuable precisely because the raw capability was unreliable and hard to orchestrate. Vercept solved the last-mile problem.

But here's the mechanism that matters: once a foundation model provider absorbs a team that solved the last-mile problem, they no longer need to leave that mile to third parties. The scaffold and the model become one thing. The category doesn't die — it gets reclassified from "third-party ecosystem" to "foundation capability." Same outcome for the third-party players as the flashlight developers, just with a longer explanation.

## The Cycle Time Has Compressed to Something Alarming

What made the iOS flashlight story feel relatively gentle, in retrospect, is that it played out over years. Developers had time to observe the pattern, pivot, find adjacent opportunities that Apple hadn't yet absorbed. The mobile platform playbook was legible enough that savvy developers learned to read the signals.

The AI equivalent is playing out in months.

SI Inc's piece on the first fully general computer action model makes the underlying dynamic visible: the capability itself is being commoditised at the foundation layer. What Vercept was building custom solutions for — understanding and navigating arbitrary GUIs, completing multi-step tasks across applications — is becoming a solved problem at the model level. The "general" in "general computer action model" is doing a lot of work in that sentence. It means the capability no longer requires bespoke training for specific applications. It means the moat that RPA vendors and computer-use startups were building — deep integrations, application-specific reliability improvements, workflow templates — is being undercut from below.

This is the part that should concern any team evaluating tools in this category. The historical analogies from platform absorption — iOS, Google Maps turn-by-turn navigation eating the standalone GPS market, Stripe Identity collapsing a third-party identity verification category that had barely finished forming — all share a common feature: by the time the absorption happened, the capability was mature enough to standardize. The foundation layer could absorb it because the shape of the problem was understood.

Computer use reached that point faster than most people expected. Vercept's acquisition is the timestamp on that maturity.

## Which Categories Just Got Reclassified

Let me be direct about the blast radius, because I think most small teams are underestimating it.

Computer use and GUI automation are the obvious ones — Vercept makes this explicit. Any tool whose core value proposition is "an agent that can click things and complete tasks across applications" is now building in the shadow of what Anthropic is assembling natively. That includes the RPA-adjacent AI players, the browser automation tools that added LLM orchestration as a selling point, and the workflow tools whose differentiator is "works with any application, no API required."

Multi-step browser agents are one reclassification behind, but only one. The capability that makes computer use general — understanding arbitrary visual interfaces, maintaining context across navigation steps, recovering from unexpected states — is the same capability that makes browser agents work. If Anthropic has absorbed the team that solved general computer action, browser agent orchestration isn't far behind on the absorption schedule.

"AI copilot for desktop application X" is a category that deserves particular scrutiny. These tools are essentially computer-use agents with a vertical focus — the AI that helps you work faster in your accounting software, your CRM, your project management tool. The vertical focus is supposed to provide the moat. But if the foundation model can now act inside arbitrary applications natively, the value of the vertical focus compresses to whatever proprietary data or workflow logic the tool has built, not the act of GUI navigation itself.

Orchestration layers for multi-step agent tasks are where I'd focus the most careful thinking. There's a version of this category that will survive: the orchestration of business logic that requires deep domain knowledge, compliance constraints, or genuinely proprietary workflow design. But there's a larger version — the orchestration that exists primarily to compensate for model unreliability, to add retry logic, to sequence steps that the model itself will soon be able to sequence natively — that I've written about as fragility tax accumulation. Adding a layer to manage agent unreliability doesn't resolve the fragility; it moves it up the stack where it's harder to see. When the model becomes reliable enough that the compensating layer is no longer necessary, what's left of the business?
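The compensating-layer pattern is easy to recognise once you look for it in code. Here is a minimal sketch of the kind of retry-and-backoff scaffolding these orchestration layers are built from; `run_agent_step` is a hypothetical stand-in for any unreliable agent action, not a real API:

```python
import random
import time


def run_agent_step(step: str) -> str:
    """Stand-in for an unreliable agent action (hypothetical)."""
    if random.random() < 0.3:  # simulate model flakiness
        raise RuntimeError(f"agent failed on: {step}")
    return f"done: {step}"


def with_retries(step: str, attempts: int = 3, backoff: float = 0.0) -> str:
    """The fragility tax in miniature: retry scaffolding that exists
    only because the underlying capability is unreliable. When the
    model becomes reliable, this entire layer is dead weight."""
    last_error = None
    for i in range(attempts):
        try:
            return run_agent_step(step)
        except RuntimeError as e:
            last_error = e
            time.sleep(backoff * (2 ** i))  # exponential backoff between tries
    raise RuntimeError(f"gave up after {attempts} attempts") from last_error
```

Notice that nothing in `with_retries` encodes domain knowledge. It is pure compensation for unreliability, which is exactly the part of the stack that disappears when the foundation layer improves.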

## The Acqui-Hire Frame Obscures What's Actually Happening

I want to push back on the way this acquisition will be covered in the next few weeks, because I think the default frame gets the story backwards.

The default frame: Anthropic wanted Vercept's talent. Meta apparently wanted them too — one of Vercept's founders was poached before the acquisition closed. So the story becomes a talent competition between the big labs, with Vercept as the prize. This is a fine story about recruiting dynamics, but it completely misses the operational signal.

The operational signal is: Anthropic decided that computer-use capability is now core to the model platform, not peripheral to it. That decision precedes the acquisition — the acquisition is the instrument of the decision, not the decision itself. A company builds a computer-use API, watches a third-party ecosystem emerge to fill the gaps in that API, then acquires the most capable filler. The sequence reveals the intent. The intent is vertical integration.

This matters for the talent frame because it tells you something about what gets absorbed. Vercept's engineers aren't being hired to work on something different. They're being hired to collapse the distance between the computer-use capability Anthropic already has and the production-ready workflow tool that Vercept had built on top of it. The product is becoming the platform feature. That's not acqui-hire. That's absorption.

## What Small Teams Should Actually Do With This

I've been writing about AI stack auditing as a four-axis problem. The Vercept acquisition adds urgency to the first axis: commoditisation risk. The question isn't "is this tool good?" — it's "is this tool solving a problem that the foundation model provider has a structural incentive to solve natively?"

For computer use specifically, the answer is now clearly yes, and the evidence isn't speculative. The absorption has begun.

This doesn't mean abandoning every tool in the blast radius immediately. Some vertical computer-use tools will survive on the strength of their domain integrations, their compliance posture, or their operational reliability in specific environments. A tool that has spent two years building reliable integrations with the specific ERP your team runs is not the same as a generic computer-use wrapper. The question is whether the value is in the act of computer use — which is being absorbed — or in the domain-specific logic built around it, which isn't.

The practical audit looks like this: for each AI tool your team is running, identify what problem it's actually solving. Then ask whether that problem is generic enough that a foundation model provider would benefit from solving it natively. If the answer is yes, give the tool a shorter planning horizon and start identifying what the foundation-native alternative looks like. Not because you need to migrate tomorrow, but because "we didn't see it coming" is an operational failure, not a surprise.
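The audit loop above can be sketched in a few lines. This is an illustrative structure, not a prescribed tool; the field names and the triage rule are assumptions for the example:

```python
from dataclasses import dataclass


@dataclass
class Tool:
    name: str
    problem_solved: str       # what the tool actually does for your team
    generic_capability: bool  # would a foundation provider benefit from solving it natively?
    domain_moat: bool         # proprietary data, compliance posture, deep integrations?


def planning_horizon(tool: Tool) -> str:
    """Rough commoditisation triage: a generic capability with no
    domain moat gets the shortest planning horizon."""
    if tool.generic_capability and not tool.domain_moat:
        return "short: identify the foundation-native alternative now"
    if tool.generic_capability:
        return "medium: the moat is the domain logic, not the GUI driving"
    return "long: the problem is specific enough to survive absorption"


stack = [
    Tool("generic computer-use wrapper", "clicks through arbitrary apps", True, False),
    Tool("ERP automation suite", "two years of reliability work on one ERP", True, True),
]
for tool in stack:
    print(f"{tool.name}: {planning_horizon(tool)}")
```

The point of writing it down this way is that the triage question is binary and answerable today, before any migration pressure exists.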

The harder version of this question: if you're building something in this space, what does your value proposition look like after Anthropic ships computer use as a native, well-documented, production-ready model capability? If the honest answer is "we become a template library and a support contract," you're looking at a business model, not a moat.

## The Flashlight Was Always Going to Be Built In

What I find genuinely interesting about the Vercept acquisition — as opposed to merely strategically concerning — is what it reveals about the pace of capability maturation in AI. The flashlight took years to get absorbed because the capability itself was simple. Computer use is not simple. The fact that it's being absorbed at the foundation layer already, while it's still an active area of research, suggests that the window between "third parties solve the last-mile problem" and "foundation provider absorbs the last-mile solution" is shrinking to something that can't support a normal venture investment timeline.

That's a structural change in how this technology layer works, not a one-off event. Benedict Evans's framing of how foundation model providers compete suggests we're watching a platform dynamic that has precedents but not exact analogues — the cycle time is too fast, the capability surface is too broad, and the absorbing entities have both the model and the distribution.

The flashlight was always going to be built in. The question for teams operating in this environment is which flashlights they're currently paying for — and whether they've noticed that the platform already knows how to turn the torch on.


Vercept's acquisition is the clearest signal I've read this year that the AI capability stack is consolidating faster than most tool roadmaps account for. If you're running an audit of your team's AI tools through a commoditisation lens, I'd start with any tool whose primary value is acting on your behalf inside other applications. That category just moved categories.