Something genuinely clarifying happened recently, and I want to make sure small teams don't miss it.

Google restricted accounts belonging to AI Pro and AI Ultra subscribers who were accessing Gemini through OpenClaw, a third-party client that uses OAuth to connect users to various AI services. These weren't free-tier users looking for a workaround. These were paying customers — people handing over real money every month for access to Google AI's more capable tiers. And one day, without advance warning, their accounts were restricted because of how they chose to access what they were paying for.

Read that again, because the operational implication is important: the restriction wasn't about what they were doing with the model. It was about the client they used to reach it.

This is platform dependency. Not as an abstract concept in a risk framework document. As a thing that happened to real people this month.

The System Is Working Exactly As Designed

Here's the uncomfortable framing: Google didn't make a mistake here. They didn't have a rogue enforcement system. From a purely structural standpoint, what happened is the logical expression of how platform relationships work.

When you subscribe to a foundation model provider, you're not buying a unit of compute that you then use however you like. You're buying access under their terms, and those terms include which pathways they choose to sanction. The distinction sounds lawyerly until it isn't — until you wake up to find your workflow broken because you were using a client the platform didn't approve.

Think of it like this: imagine you rent a co-working space and they decide, mid-lease, that only their own brand of monitors can be plugged into the desk power outlets. You're still a paying tenant. The electricity is still flowing. But the tool you brought to use it is now in violation. Frustrating? Absolutely. Breach of contract? That probably depends on the fine print. Surprising, given how platform economics work? It really shouldn't be.

What makes the Google/OpenClaw situation particularly instructive is the combination of factors: paying customers, no advance notice, a restriction based on access method rather than usage behavior, and a third-party tool that users had actively integrated into their workflows. Every one of those factors is a risk vector that small teams routinely ignore when they're in the happy-path phase of adopting a new AI tool.

The question I'd be asking isn't "is this fair?" It's "which of my current workflows has the same exposure?"

What This Means for How You Build Your Stack

The dependency risk here has two layers, and most post-mortems on situations like this only talk about one of them.

Layer one is the obvious one: you built a workflow on top of a third-party client, and the platform decided to restrict it. Your workflow breaks. This is the story as usually told — vendor lock-in, platform risk, diversify your stack, etc. Good advice, worth repeating, but not the full picture.

Layer two is subtler and more interesting: the platform has a structural incentive to prefer its own clients over third-party ones. Not because they're malicious, but because third-party clients reduce platform stickiness, complicate data relationships, and — crucially — make it harder for the platform to understand and monetize usage patterns. Every time a user accesses Gemini through OpenClaw instead of Google's own interface, Google gets less signal about how the product is being used. From a product strategy standpoint, that's a real cost.

This connects to something I've been reading about lately in the LLM wrapper space: foundation model providers have a systematic incentive to absorb value from the intermediary layer. The OpenClaw restriction is a softer version of that same pressure — not absorption, but exclusion. Different mechanism, same underlying incentive.

For small teams, this means the risk assessment for any AI tool in your stack needs a second question alongside "does this work well?" That second question: is this tool's access to the underlying model sanctioned by the platform, or is it operating in a grey zone? Because grey zones have a way of becoming red zones, and usually not on your timeline.

Practical things worth doing right now:

Audit your access methods. If you're using any third-party client to access a foundation model — whether it's for Gemini, Claude, GPT-4, or anything else — check whether that access method is explicitly supported in the platform's terms. "It works" and "it's sanctioned" are not the same thing.
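
If you want that audit to outlive a one-off conversation, it helps to write the inventory down somewhere a script can read it. Here's a minimal sketch in Python; the tool names, access paths, and the "sanctioned" flag are illustrative placeholders you'd fill in from your own stack and each platform's actual terms.

```python
# access_audit.py -- a hand-maintained inventory of how each tool in the
# stack reaches its underlying model. Every entry here is illustrative.

INVENTORY = [
    {"tool": "internal-summarizer", "model": "gemini",
     "access": "official REST API, our own key", "sanctioned": True},
    {"tool": "OpenClaw", "model": "gemini",
     "access": "third-party OAuth client", "sanctioned": False},
]

def grey_zone(inventory):
    """Return every entry whose access path isn't explicitly sanctioned."""
    return [e for e in inventory if not e["sanctioned"]]

if __name__ == "__main__":
    for entry in grey_zone(INVENTORY):
        print(f"REVIEW: {entry['tool']} reaches {entry['model']} "
              f"via {entry['access']}")
```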

Map your single points of failure. Which workflows in your operation break if a specific AI client becomes unavailable tomorrow? If the answer is "several important ones," you have concentration risk that isn't showing up on any dashboard.
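
One way to surface that concentration risk is to invert the mapping: instead of listing which client each workflow uses, compute which workflows each client takes down with it. A sketch, with hypothetical workflow and client names standing in for your own:

```python
# spof_map.py -- which workflows break if a specific client disappears?
# Workflow and client names below are hypothetical placeholders.

from collections import defaultdict

WORKFLOWS = {
    "support-ticket triage":  ["OpenClaw"],
    "weekly report drafts":   ["OpenClaw"],
    "contract summarization": ["official Gemini API"],
}

def blast_radius(workflows):
    """Invert workflow -> clients into client -> dependent workflows."""
    radius = defaultdict(list)
    for workflow, clients in workflows.items():
        for client in clients:
            radius[client].append(workflow)
    return radius

if __name__ == "__main__":
    for client, dependents in blast_radius(WORKFLOWS).items():
        print(f"If {client} goes away, you lose: {', '.join(dependents)}")
```

If one client shows up under most of your workflows, that's the concentration risk that wasn't on any dashboard, now in plain sight.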

Treat sanctioned API access as the baseline, not the fallback. If you need programmatic or multi-tool access to a model, use the official API with proper keys and rate limits. It's more friction to set up. It's also the access pathway the platform has the most incentive to keep stable, because it generates direct revenue on transparent terms.
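
"Proper keys and rate limits" is less glamorous than it sounds: a key read from the environment, a request timeout, and backoff when the platform says slow down. The sketch below uses plain HTTP with a placeholder endpoint and payload; it is not any provider's real API, so substitute the documented URL and schema for whichever model you actually use.

```python
# official_api_call.py -- disciplined access through an official API.
# ENDPOINT and the request payload are placeholders, not a real provider
# API; swap in the documented values for your model of choice.

import os
import time

import requests

API_KEY = os.environ["MODEL_API_KEY"]  # keep keys out of source control
ENDPOINT = "https://api.example-provider.com/v1/generate"  # placeholder

def generate(prompt: str, max_retries: int = 4) -> dict:
    """Call the official API, backing off exponentially on 429 responses."""
    for attempt in range(max_retries):
        resp = requests.post(
            ENDPOINT,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={"prompt": prompt},
            timeout=30,
        )
        if resp.status_code == 429:  # rate limited: wait, then retry
            time.sleep(2 ** attempt)
            continue
        resp.raise_for_status()
        return resp.json()
    raise RuntimeError("still rate-limited after all retries")
```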

Watch what happens next with OpenClaw. Whether Google clarifies policy, reverses the restrictions, or doubles down is itself useful signal about how they intend to manage third-party access going forward. Platform behavior in ambiguous situations is more predictive than their stated policy.

The Practical Takeaway

The OpenClaw situation won't be the last one like it. As foundation model providers mature, they will increasingly have both the incentive and the technical capability to shape how users access their products. That's not cynicism — it's just how platform dynamics work, from app stores to cloud APIs.

Small teams that plan ahead for this aren't being paranoid. They're being accurate about the nature of the relationships they're entering. The question isn't whether you trust Google, or Anthropic, or OpenAI. The question is whether you've built workflows that assume a level of access stability those relationships don't actually guarantee.

Platform dependency isn't a risk that announces itself. It shows up one morning as a restricted account, a broken integration, or a deprecation notice with a ninety-day runway. The teams that feel it least are the ones who planned for the worst case before they needed to.