Margin of Safety #48: Anthropic–Department of War and Implications for AI Dependency Risk
Jimmy Park, Kathryn Shih
March 18, 2026
- Blog Post
AI presents just another vendor dependency risk
The Trigger
Earlier this month, the Department of War designated Anthropic [1] a “supply chain risk” and began phasing out its models from military use. The decision stemmed not from a technical problem but from a disagreement over acceptable use cases: Anthropic declined to modify certain safeguards, and the government responded by restricting access and transitioning to alternative providers. This episode illustrates a longstanding issue in cyber resiliency: when a critical process has a single-sourced external provider, an organization’s control is inherently constrained.
A Familiar Pattern
Organizations traditionally avoid single points of failure; senior executives fly on separate planes, critical systems are distributed across regions, and supply chains are diversified. As the importance of a capability increases, tolerance for dependency on a single provider declines.
Cloud and SaaS introduced new external dependencies, trading control for efficiency. That trade leaves customers downstream of vendor execution[2], strategy, pricing, and investment levels. Prices can change (and, as AWS has recently shown, not always for the better) and strategic priorities shift. In general, the further a usage pattern sits from the vendor’s ideal customer profile, the greater the risk that the upstream dependency will languish. In extreme cases, vendor services graduate from underinvestment to full-on deprecation, forcing emergency customer migrations. Some large tech companies have done this often enough to become a meme.
AI adds a new dimension: it often functions as a deeply integrated infrastructure service that cannot be quickly decoupled, and unlike existing cloud services, swapping AI vendors is full of “unknown unknowns” – nobody is quite sure exactly where competing models under- or overperform versus each other, and many customers lack the broad evaluation datasets that would allow them to quantify such gaps.
AI as a Continuation, Not a Departure
The dispute between Anthropic and the Department of War highlights these new and existing risks. That the disruption arose from policy divergence rather than technical failure simply shows that external dependencies are as exposed to contractual and regulatory pressures as they are to delivery problems or outages. The complexity that AI adds is operational – how does the department validate OpenAI’s performance against what it was seeing from Claude? – but the core issues ring familiar.
Differentiating Levels of Dependency
Not all workloads – including AI ones – require the same level of resilience. A tiered approach is more practical.
On one end, experimental use cases can tolerate reliance on a single model provider. In these contexts, speed and iteration are more important than redundancy.
For workflows that begin to influence business processes, it becomes necessary to maintain some degree of flexibility. Benchmarking across models (or other infrastructure components) and avoiding tightly coupled, vendor-specific implementations can reduce the cost of switching. The less confidence a consumer has in a vendor’s long-term stability and support commitment, the higher the return on such benchmarks.
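To make that concrete, here is a minimal sketch of what a provider-agnostic seam plus a small benchmark harness might look like. This is our illustration, not any vendor’s actual SDK – the class names, the `complete` signature, and the scoring callback are all assumptions:

```python
# Sketch only: a provider-agnostic seam plus a tiny benchmark harness.
# Class names and the `complete` signature are illustrative assumptions,
# not any vendor's real SDK.
from abc import ABC, abstractmethod
from typing import Callable


class CompletionProvider(ABC):
    """The one seam where vendor-specific details live."""

    name: str

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


def benchmark(
    providers: list[CompletionProvider],
    golden_set: list[tuple[str, str]],      # (prompt, expected) pairs
    score: Callable[[str, str], float],     # e.g. exact match or a rubric
) -> dict[str, float]:
    """Average score per provider over the same golden set."""
    results: dict[str, float] = {}
    for provider in providers:
        scores = [
            score(provider.complete(prompt), expected)
            for prompt, expected in golden_set
        ]
        results[provider.name] = sum(scores) / len(scores)
    return results
```

Because business logic depends only on the interface, swapping vendors means writing one new subclass rather than rewriting every call site – and the benchmark quantifies what the swap would cost.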
For mission-critical applications, the requirements become serious. In these cases, organizations benefit from validated fallback options, multi-model evaluation, and clear governance over vendor dependence. The distinction between theoretical and tested portability becomes meaningful in practice, with disaster recovery and failover drills as a best practice.
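Continuing the sketch above, a failover wrapper is one hedged illustration of what a validated fallback could look like. The retry policy and logging are our assumptions; a real system would add timeouts, circuit breakers, and output validation:

```python
# Sketch only: failover across providers, reusing the CompletionProvider
# seam from the previous example. Policy details are illustrative.
import logging

logger = logging.getLogger(__name__)


class FailoverProvider(CompletionProvider):
    """Try providers in priority order; fall through on failure.

    A fallback is only as trustworthy as its last test, which is why
    periodically forcing live traffic to the secondary (a failover
    drill) turns theoretical portability into tested portability.
    """

    name = "failover"

    def __init__(self, providers: list[CompletionProvider]):
        self.providers = providers

    def complete(self, prompt: str) -> str:
        last_error: Exception | None = None
        for provider in self.providers:
            try:
                return provider.complete(prompt)
            except Exception as error:  # outages, rate limits, revoked access
                logger.warning("provider %s failed: %s", provider.name, error)
                last_error = error
        raise RuntimeError("all providers failed") from last_error
```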
For certain high-control environments, open-weight models may serve as part of a contingency strategy. They provide greater control over deployment and reduce exposure to vendor-specific policy changes. While they may not match the performance of leading proprietary models, they can offer continuity when external dependencies become constrained.
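As one more hedged illustration: several open-weight serving stacks, such as vLLM and Ollama, can expose OpenAI-compatible HTTP endpoints, so a locally hosted model can sit behind the same seam. The URL, model name, and payload below are illustrative assumptions, not a recommendation:

```python
# Sketch only: a locally hosted open-weight model behind the same
# CompletionProvider seam, assuming an OpenAI-compatible endpoint.
import requests


class LocalOpenWeightProvider(CompletionProvider):
    name = "local-open-weight"

    def __init__(
        self,
        base_url: str = "http://localhost:8000/v1",  # illustrative
        model: str = "llama-3.1-8b-instruct",        # illustrative
    ):
        self.base_url = base_url
        self.model = model

    def complete(self, prompt: str) -> str:
        response = requests.post(
            f"{self.base_url}/chat/completions",
            json={
                "model": self.model,
                "messages": [{"role": "user", "content": prompt}],
            },
            timeout=60,
        )
        response.raise_for_status()
        return response.json()["choices"][0]["message"]["content"]
```

Placed last in the failover chain above, a provider like this trades peak performance for continuity when external access becomes constrained.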
In practice, many organizations are likely to adopt a combination of approaches, using frontier models where performance is required and alternative options where resilience is prioritized.
A Practical Question
As AI becomes more integrated into business operations, the relevant question is not simply which model performs best. It is how the organization would respond if access to that model were disrupted.
The Anthropic–Department of War dispute demonstrates that such disruptions can arise from policy disagreements, regulatory decisions, or contractual conflicts. These are not new risks, but they may have more immediate consequences when they affect systems that sit closer to core decision-making processes. Organizations that can answer this question with clarity are likely to have a better understanding of their dependency exposure.
Appendix: When Access Becomes Conditional
A related question is how much control model providers should exert over downstream use. The Anthropic case is illustrative. The company declined to support certain targeted military applications, in part because its models were not designed or validated for those use cases.
Kathryn’s view is that such restrictions are appropriate. If a model has not been trained or evaluated for a high-consequence domain, refusal can be seen as a form of risk management. In this case, training and testing the model for military-specific use cases would be expensive in both capital and scarce human expertise. There are also organizational considerations – if the technical team knows that its work product is being deployed in life-or-death scenarios where it’s unlikely to perform well, morale can erode dramatically. When people are working long hours to push the boundaries of technical capability, those factors matter.
However, Jimmy’s view is that, provided limitations are clearly disclosed and performance boundaries are well understood, responsibility should rest primarily with the user. (Yes, we do disagree sometimes!) In this framing, models are tools, and the decision to deploy them – whether in military or other contexts – sits with the operator rather than the provider. The distinction has practical implications. If providers retain discretion over acceptable use, then access itself becomes a variable in system design, not a given. For organizations relying on external models, that reinforces a broader point: dependency risk is shaped not only by technology and pricing, but also by the governance choices of the vendor.
If you’re building something in this space, feel free to reach out to jpark@forgepointcap.com and kshih@forgepointcap.com.
This blog is also published on Margin of Safety, Jimmy and Kathryn’s Substack, as they research the practical sides of security + AI so you don’t have to.
[1] We’re assuming any reader knows who Anthropic is! If not, they make Claude, a top-performing LLM and notably strong coding agent.
[2] As a very practical example, we know multiple AI customers who have struggled with low uptime on some frontier model endpoints.