There's a standard crisis response playbook for consumer backlash: hold the line, let the news cycle turn, measure churn, adjust messaging. It works because consumer exits are episodic. People delete apps, sign declarations, post threads — and then the next thing happens and attention moves. The uninstall surge fades. Revenue recovers. You wait it out.
There's a different playbook for talent departures: announce a thoughtful transition, praise the outgoing executive's contributions, quietly redistribute the org chart. It works because individual exits, even significant ones, are containable. Organizations absorb departures. Roles get backfilled. Institutional knowledge bleeds slowly enough that you can't see it happening until you're already past the point where it mattered.
What I haven't seen a playbook for is both happening simultaneously, triggered by the same event, in the same news cycle.
That's the situation OpenAI is now in. And I don't think the standard response to each problem, applied in parallel, is adequate to the compound one.
Two Different Signal Classes
Caitlin Kalinowski's resignation is being reported alongside the Pro-Human Declaration and the broader Pentagon-AI standoff as part of a single values story. That framing is accurate as far as it goes, but it obscures something operationally important: these are different signal classes, and they fail in different ways.
The Pro-Human Declaration signatories — researchers, ethicists, concerned technologists — are making a conscience statement. They're saying "we disagree with where this is going." That's a real signal, worth taking seriously. But conscience statements don't remove capability. The signatories aren't the people responsible for building the robotics hardware. They can withdraw moral endorsement; they can't withdraw the engineering team.
Kalinowski wasn't making a conscience statement. She was leading the robotics team. Her departure is an executive capability exit — she's taking with her not just a title but the organizational judgment, technical credibility, and leadership continuity that her role depended on. When an exec leaves over a values question, what matters operationally isn't whether you agree with her values. The question is: what was she the linchpin of, and how long does it take to rebuild that?
This distinction matters because it changes what you're actually losing. Conscience statements are reputational costs. Capability exits are operational costs. They require different responses, different timelines, and different organizational resources to absorb.
Independent Variables, Compounding Risks
Here's the assumption I suspect AI labs are making: user-layer exits and builder-layer exits are independent variables with independent response curves. Consumer uninstalls create revenue and market share pressure. Executive departures create org chart and capability gaps. Both are manageable, separately, because organizations have gotten good at managing each one separately.
The problem is that they don't stay separate when they share a trigger event.
When consumers exit in response to a Pentagon deal, that's pressure on the revenue model — pressure on the very revenue that funds the R&D and talent retention that might otherwise stabilize the builder layer. When builders exit in response to the same deal, that's a reduction in the technical capability to respond to the competitive pressure the consumer exits are creating. Each exit is survivable in isolation. But the consumer exits are degrading the resources available to address the capability gap the builder exits created — and the builder exits are reducing the capacity to win back the users the consumer exits represent.
This is what compounding looks like operationally: two feedback loops that were manageable when separate become reinforcing when they share a common cause.
The analogy that comes to mind is a ship losing both an engine and its chief engineer in the same incident. You can respond to an engine failure by putting a repair crew on it. You can respond to losing your chief engineer by redistributing her responsibilities. But losing both at once means the people responsible for repairing the engine are also managing the leadership transition — and the engine is still failing while that happens.
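The compounding claim can be made concrete with a toy model. This is a sketch under invented assumptions — the recovery rates, starting values, and coupling rule are all illustrative, not data about OpenAI or any real organization. Two capacities (user base, builder capability) recover toward full strength after a shock; in the coupled case, each side's recovery rate is throttled by the other side's current level:

```python
# Toy model of the compounding dynamic described above.
# Two capacities (user base, builder capability) recover toward 1.0
# after a shock. "Coupled" means each side's recovery rate is throttled
# by the other side's current level. All numbers are illustrative
# assumptions, not data about any real organization.

def recover(u0, b0, rate=0.2, coupled=False, steps=60):
    """Return the (users, builders) trajectory, each a fraction of full strength."""
    u, b = u0, b0
    history = [(u, b)]
    for _ in range(steps):
        u_rate = rate * (b if coupled else 1.0)  # winning users back needs builders
        b_rate = rate * (u if coupled else 1.0)  # rebuilding builders needs revenue
        u += u_rate * (1.0 - u)
        b += b_rate * (1.0 - b)
        history.append((u, b))
    return history

def steps_to_recover(history, level=0.95):
    """First step at which both capacities are back above `level`."""
    for t, (u, b) in enumerate(history):
        if u >= level and b >= level:
            return t
    return None

independent = recover(0.5, 0.5, coupled=False)
coupled = recover(0.5, 0.5, coupled=True)
print(steps_to_recover(independent))  # 11 steps
print(steps_to_recover(coupled))      # 14 steps: same shock, slower recovery
```

The specific numbers don't matter; the point is that coupling alone — same shock, same underlying recovery rates — lengthens the recovery, and the gap widens as the shock deepens.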
What the Playbook Misses
The standard crisis communications response to this week's events will be: acknowledge the departures respectfully, reaffirm mission, point to continued investment in safety research. That's not wrong, but it's a response to the reputational surface of the problem, not the operational interior.
What it doesn't address is the compounding dynamic — specifically, whether the organization has the capacity to recover from both simultaneously. Recovering from consumer churn requires strong product and retention work. That work requires exactly the kind of senior technical leadership that values-driven exits remove. The organization is being asked to run a recovery operation with fewer of the people who would normally run it.
There's a second compounding effect that's less visible: the signaling function. When a hardware exec of Kalinowski's seniority resigns, it's not just an organizational event — it's information for other senior people inside the organization about what they can expect. Capability exits have a social proof problem: they lower the perceived cost of the next exit. The first departure is news. The third departure is a pattern. And patterns, once established, become self-reinforcing in exactly the way that individual departures don't.
The question I'd want answered if I were on the operations side of any AI lab watching this week unfold isn't "will our users leave?" or "will our builders leave?" It's: have we actually modeled what our organizational response capacity looks like if both happen in the same quarter? Do we have the bench depth to run a product recovery while simultaneously rebuilding leadership capability? And critically — are those two recovery operations competing for the same set of internal resources?
If the answer to the last question is yes, then managing these as separate risks isn't a strategy. It's a way of not looking at the compound problem directly.
The exit you can wait out is the episodic one. The exit you can't wait out is the one that removes the people responsible for ending the episode.