There's a pattern from platform history that keeps reasserting itself, and I've been watching it set up again in AI coding tools.
In the early iOS era, there were dozens of flashlight apps on the App Store — each with slightly different UI, brightness controls, strobe modes, five-star reviews. Then Apple built a flashlight into the Control Center. The apps didn't disappear immediately, but the category effectively closed. The fragmentation that looked like a healthy ecosystem was actually a demand signal Apple had been monitoring. When the signal was clear enough, they absorbed it.
Right now, the GitHub trending list for AI coding assistants looks like the App Store flashlight category circa 2011. Cursor, Windsurf, Cline, Continue, Aider, Devin — plus a dozen others trending in the same week. Meanwhile, Perplexity is pushing into computer use, which means the foundation layer is actively expanding toward the same territory these tools occupy.
Fifteen-plus tools trending at once is not evidence of a healthy, expanding market. It's evidence of three things happening at the same time: a capability just crossed a viability threshold, nobody has won on differentiation yet, and the foundation model providers just received a very loud demand signal.
What Peak Fragmentation Actually Signals
The simultaneous trending isn't accidental. It reflects a specific moment in capability maturity: the underlying models got good enough that a competent team could ship something useful in weeks, so many competent teams did. The result is a crowded room where most of the tools are — and I say this with genuine affection for the builders — essentially wrapping the same model capabilities with different UX decisions on top.
That's the absorption candidate profile: a tool whose core value proposition is "we give you access to [foundation model capability] in a nicer interface." When the foundation model provider ships that capability natively — or acquires the team that proved it works — the wrapper evaporates. Not because the product was bad. Because it was never structurally distinct from what the foundation layer was going to do anyway.
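To make the wrapper profile concrete: reduced to code, an entire product in this category can look like the sketch below. This is illustrative only, assuming the OpenAI Python SDK; the model name, prompts, and `suggest_fix` helper are placeholders, not any shipping product's internals.

```python
# What the "thin wrapper" profile looks like reduced to code. Illustrative
# sketch: client usage follows the OpenAI Python SDK; the model name and
# prompts are placeholders, not any real product's internals.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def suggest_fix(code: str, error: str) -> str:
    # The entire "core value proposition" lives in this one call.
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[
            {"role": "system", "content": "You are a careful coding assistant."},
            {"role": "user", "content": f"Fix this error:\n{error}\n\nCode:\n{code}"},
        ],
    )
    return response.choices[0].message.content or ""
```

Everything in that function is one platform announcement away from being a native feature. That's the structural exposure, and it has nothing to do with how good the UX on top is.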
I've been tracking the Vercept acquisition as a case study here: a computer-use company gets absorbed, and suddenly every computer-use wrapper is inside the blast radius. Perplexity Computer is interesting precisely because Perplexity is positioning itself as a foundation-layer player, not a wrapper. That changes the calculus — they're not an absorption candidate, they're trying to be one of the absorbers.
The pattern from Google Maps is instructive: turn-by-turn navigation apps thrived until routing became a core OS capability. The apps that survived were the ones that had built something the OS layer couldn't easily replicate — specific data integrations, fleet management hooks, use-case specificity. The rest got compressed.
The Survivor Characteristics
From what I can tell reading across the category, durability correlates with a specific set of properties. Not all of them — you probably only need two or three to stay outside the blast radius.
Meaningful workflow integration, not just model access. Tools that are embedded deeply enough in a team's actual development loop that switching costs are real. Cursor has this — it's not just "AI in your editor," it's your editor, with months of configuration, rules, and context that don't port cleanly to the next thing. That's a moat. A thin API wrapper over GPT-4o is not.
Architectural separation at a real boundary. The most interesting thing I've read about Claude Code's approach is the planning/execution split — not as a reliability workaround, but as a genuine architectural boundary between two different cognitive modes (a sketch of that control flow follows this list). That's durable because it reflects something real about how complex coding tasks work, not just a layer added to manage agent anxiety. Tools built on real architectural insight tend to outlast tools built on the current capability ceiling.
A data or feedback flywheel. Tools that get meaningfully better as more teams use them — not necessarily through model fine-tuning, but through accumulated patterns, shared context, or community-contributed capabilities. Aider's approach to working with existing codebases through git history is in this territory: it's not just calling an API, it's building on information the model doesn't have by default (the second sketch after this list gestures at the idea).
Opinionated scope. The tools I'm least worried about in a consolidation scenario are the ones with strong opinions about what they don't do. An agent that does one thing very well and has a clear boundary around it is harder to absorb than a general-purpose assistant trying to do everything. When foundation models ship coding assistance natively, the general-purpose tools get squeezed hardest.
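On the planning/execution split: here is a minimal sketch of the boundary as plain control flow, under loud assumptions. It is not Claude Code's implementation; `call_model` and `execute` are stubs standing in for a model client and a workspace runner.

```python
# A planning/execution boundary as control flow. Not any tool's actual
# internals; call_model and execute are stubs so the sketch runs as-is.
from dataclasses import dataclass

@dataclass
class StepResult:
    ok: bool
    error: str = ""

def call_model(prompt: str) -> str:
    # Stub: a real tool would call a foundation model here.
    return "1. Reproduce the failure\n2. Patch the function\n3. Re-run the tests"

def execute(step: str) -> StepResult:
    # Stub: a real tool would edit files and run commands here.
    print(f"executing: {step}")
    return StepResult(ok=True)

def run_task(task: str) -> None:
    # Phase 1: plan. The model only reasons; nothing here touches the workspace.
    plan = call_model(f"Break this task into numbered, verifiable steps:\n{task}")
    steps = [line.strip() for line in plan.splitlines() if line.strip()]
    # Phase 2: execute. Steps run one at a time, so a bad plan fails at the
    # first step instead of compounding across the whole task.
    for step in steps:
        result = execute(step)
        if not result.ok:
            # Route the failure back to planning mode instead of improvising
            # mid-execution; a real tool would loop on the revised plan.
            call_model(f"A step failed: {result.error}. Revise the remaining plan.")
            break

if __name__ == "__main__":
    run_task("Fix the flaky integration test")
```

The durable part is the routing: failures go back to planning mode rather than being improvised around mid-execution.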
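And on the git-history flywheel, a sketch of the idea rather than Aider's actual implementation: pull repo-specific signal with nothing but the standard library and git itself, then attach it to the prompt. `build_prompt` is a hypothetical helper.

```python
# The flywheel idea in miniature: harvest repo-specific signal the model
# doesn't have by default. Spirit of git-history-as-context, not Aider's
# implementation; uses only the standard library plus git.
import subprocess

def recent_history(repo: str, n: int = 20) -> str:
    # Commit subjects plus the files each commit touched: cheap context
    # that is unique to this repository.
    result = subprocess.run(
        ["git", "-C", repo, "log", f"-{n}", "--name-only", "--pretty=format:%s"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def build_prompt(repo: str, request: str) -> str:
    return (
        "Recent changes in this repository:\n"
        f"{recent_history(repo)}\n\n"
        f"Task: {request}"
    )
```

Nothing sophisticated is happening there; the durability comes from where the signal lives (your repo), not from how it's fetched.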
The absorption candidates, by contrast, are tools where you can look at the feature list and say: "this is a thoughtful interface for a capability the foundation model already has." That's a valid product. It's not a durable one.
What Small Teams Should Do Right Now
The stack decision isn't "pick the best tool today." It's "pick tools with survivor characteristics before consolidation happens."
That means auditing your current coding tools against a short checklist. Not feature lists — durability indicators. Does this tool have deep workflow integration that creates genuine switching costs? Is the core value proposition something the foundation model layer can't absorb without significant engineering work? Does the tool have an architectural angle that reflects real insight, not just a better prompt?
If the answer to all three is "not really, it's mostly a good wrapper" — that's not a reason to abandon it immediately, but it is a reason to hold it loosely and not build deep dependencies on it. Treat it as current-generation infrastructure, not long-term investment.
The practical play for small teams: anchor your stack on tools with genuine workflow integration (even if they're slower to adopt or harder to configure), and use the wrapper tools for acceleration without entrenching them. The wrappers are often excellent right now. They're just running on borrowed time.
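If it helps to make the audit mechanical, the checklist compresses into a toy scoring pass. Everything below is illustrative; the criteria mirror the three audit questions above, and the entries are hypothetical, not verdicts on real products.

```python
# The durability audit as a toy scoring pass. Purely illustrative: criteria
# mirror the three questions above; the tool entries are hypothetical.
DURABILITY_CRITERIA = (
    "deep_workflow_integration",  # real switching costs
    "hard_to_absorb_core",        # expensive for the foundation layer to ship
    "architectural_insight",      # more than a better prompt
)

def durability_score(tool: dict) -> int:
    return sum(bool(tool.get(c)) for c in DURABILITY_CRITERIA)

stack = [
    {"name": "editor-integrated assistant",
     "deep_workflow_integration": True, "architectural_insight": True},
    {"name": "thin API wrapper"},
]

for tool in stack:
    score = durability_score(tool)
    verdict = "anchor" if score >= 2 else "hold loosely"
    print(f"{tool['name']}: {score}/3 -> {verdict}")
```

Reductive on purpose: the useful part is forcing the question tool by tool rather than stack-wide.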
The crowded room clears fast when the foundation layer moves. The question worth asking before it happens: if the model provider ships this natively tomorrow, what do I actually lose?
If the answer is "not much," you already know where that tool sits on the fragmentation curve.