Pentagon vs Anthropic: Cut-off Threat

The Pentagon is threatening to cut off Anthropic over the limits it places on military uses of its AI, pressing companies to make their tools available on classified networks. The standoff highlights tensions between defence demands, corporate safety guardrails and national security risks.

Pentagon threatens a cut-off as talks with Anthropic hit an impasse

This week the Pentagon threatened to cut off Anthropic, according to reporting that surfaced on 16 February 2026, as the U.S. Department of Defense presses the company to loosen the limits it places on how its large language models may be used by the military. The dispute, reported by CNBC alongside earlier Reuters and Axios coverage, centres on whether commercial AI firms will permit their systems to be used for what the Pentagon describes as "all lawful purposes," including weapons development, intelligence collection and battlefield operations. Anthropic says it has stood its ground on specific safety constraints, such as hard limits on fully autonomous weapons and mass domestic surveillance, and the disagreement has escalated to threats that the relationship could be cut off.

Pentagon's cut-off threat: the legal and contractual friction points

The heart of the clash is legal and contractual language. The Pentagon wants access to models on classified networks and broader wording that would allow use across a range of military missions; companies have typically provided access subject to model-level safeguards and policy-based restrictions. For the Pentagon, the phrase "all lawful purposes" is a single contractual term that would reduce ambiguity about permitted military use. For companies such as Anthropic, OpenAI, Google and xAI, accepting that wording risks eroding internal safety guardrails—both technical controls built into the models and corporate policy commitments intended to limit misuse.

Negotiations also touch technical details: how models are deployed on classified networks, whether content filters and usage logging remain in place, and who is responsible if an AI-driven system produces harmful outputs. The Pentagon has signalled frustration after months of talks; the push to remove or loosen restrictions reflects an operational desire to use the most capable systems available in contested environments, while firms worry about liability, reputational risk and the broader ethics of military applications.
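To make the logging question concrete, the sketch below shows a generic audit-logging wrapper around a model call. Every name here is a hypothetical illustration rather than any vendor's real API; the point is simply that the vendor-side record of who asked what tends to disappear when a model is re-hosted on an isolated network, which is part of why accountability is contested.

```python
import hashlib
import json
import time

# Hypothetical illustration only; not Anthropic's, the DoD's, or any vendor's actual API.

def call_model(prompt: str) -> str:
    """Stand-in for a request to a hosted language model."""
    return f"[model response to {len(prompt)} characters of input]"

def audited_call(user_id: str, prompt: str, log_path: str = "usage_audit.jsonl") -> str:
    """Forward a prompt to the model and keep an append-only audit record.

    Vendor-hosted deployments typically keep a trail along these lines so that
    misuse can be detected and investigated; an air-gapped deployment would have
    to recreate it, or accept that the vendor no longer sees any of it.
    """
    response = call_model(prompt)
    record = {
        "ts": time.time(),
        "user": user_id,
        # Store a hash rather than the raw prompt so the log itself is not a
        # copy of potentially sensitive input.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_chars": len(response),
    }
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")
    return response

if __name__ == "__main__":
    print(audited_call("analyst-042", "Summarise the open-source reporting on this dispute."))
```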

How Anthropic and the other vendors have responded

Anthropic has publicly said that early conversations with the U.S. government focused on usage-policy questions and explicit boundaries—most notably bans on fully autonomous weapons and limits on mass domestic surveillance—rather than greenlighting specific operations. A company spokesperson told reporters it had not discussed the use of its Claude model for named, tactical operations with the Pentagon. That statement sits alongside reporting that Claude was used in at least one high-profile operation via a partner: the Wall Street Journal reported Claude was deployed through Palantir in an operation to capture Venezuela's former president, demonstrating how partner integrations complicate simple characterisations of corporate restraint.

Other major AI firms named in reporting—OpenAI, Google and xAI—are also the subject of Pentagon pressure to expand access. Reuters and CNBC described broad DoD requests to make models available on classified networks without many of the standard restrictions companies apply to civilian users. Firms have handled those requests differently, balancing contract opportunities and security clearances against internal policies and external scrutiny.

Why the Pentagon is pushing and what it wants to achieve

The Pentagon's motivation is largely operational: it wants the most capable commercial models available to military users in contested environments without negotiating exceptions mission by mission, and it regards the standard restrictions applied to civilian customers as an obstacle to adoption. But pushing for blanket, open-ended contractual language such as "all lawful purposes" raises questions about what safeguards remain in place when models are used in kinetic or covert contexts. Lawful use is a broad legal standard that still requires implementing controls to avoid unintended escalation, civilian harm, or breaches of domestic and international law. Those operational demands are the source of the current tension.

Technical safeguards at stake and the Pentagon's concerns

Companies build safeguards into models along two axes: model-level controls (fine-tuning, safety layers, red-team testing and content filters) and policy-level controls (terms of service and contractual usage restrictions). The Pentagon's push to reduce or bypass restrictions would change how those controls operate in practice. For example, placing a model on a classified network may remove some telemetry and logging for security reasons, but it can also undercut the ability of the company to monitor misuse and patch problematic behaviour.
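As a rough illustration of the two axes, the sketch below, with entirely hypothetical category names and functions that do not correspond to any company's actual controls, shows how a policy-level check and a model-level filter stack on top of each other before a response is returned. Contractual terms that remove either layer change what the vendor can still enforce or observe.

```python
# Hypothetical sketch of layered safeguards; names and rules are illustrative,
# not any vendor's actual usage policy or safety filter.

PROHIBITED_USE_CATEGORIES = {
    "autonomous_weapon_control",   # policy-level: barred by contract / terms of service
    "mass_domestic_surveillance",
}

def policy_check(declared_use: str) -> bool:
    """Policy-level control: reject requests whose declared use is contractually barred."""
    return declared_use not in PROHIBITED_USE_CATEGORIES

def model_safety_filter(output: str) -> str:
    """Model-level control: post-process output before it reaches the caller."""
    # A real filter would be a trained classifier or safety layer; this
    # placeholder just tags the output to mark where that step sits.
    return f"[filtered] {output}"

def generate(declared_use: str, prompt: str) -> str:
    if not policy_check(declared_use):
        raise PermissionError(f"Use category '{declared_use}' is not permitted under the usage policy.")
    raw_output = f"[model response to: {prompt!r}]"   # stand-in for the model call
    return model_safety_filter(raw_output)

if __name__ == "__main__":
    print(generate("intelligence_analysis", "Summarise logistics constraints in the region."))
```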

Reportedly, the Pentagon is frustrated that Anthropic has resisted proposals to relax limits on scenarios the company deems high risk—such as fully autonomous weapon control and mass domestic surveillance. Those are areas where many AI firms, not just Anthropic, have defended strict prohibitions on autonomous decision-making that could cause lethal outcomes without human oversight.

How cutting access could ripple through safety and security

If the Pentagon follows through and cuts ties with Anthropic, the immediate effect would be a narrower set of vendors available to the U.S. defence community. That would likely accelerate internal DoD efforts to build or contract bespoke models, increase dependence on firms that accede to broader usage terms, or encourage partnerships with foreign suppliers—each outcome carrying its own security and oversight trade-offs. A cut-off could also push agencies to accept solutions with fewer vendor-side safety controls, increasing the risk of misuse or unintended consequences.

Conversely, companies that maintain robust safety restrictions argue that these guardrails reduce downstream risks, prevent mission creep, and preserve public trust. The standoff shows a deeper strategic choice: whether national security actors tolerate corporate guardrails that limit certain military applications, or whether they insist on unfettered access and accept new oversight burdens within government systems.

Past precedents and the wider policy context

The DoD has previously funded AI research and formed partnerships with industry, but the current moment stands out because of the scale and maturity of commercially available large language models. Earlier collaborations included conditional grants and pilot projects that required specific safeguards; the present dispute is about broad operational terms rather than pilot funding. The Pentagon's threat to cut off a vendor over policy disagreements is an escalation in public view and signals tough bargaining ahead for all major suppliers of advanced AI tools.

There are also political pressures at play. Public figures and competitors have weighed in—most notably Elon Musk, who recently attacked Anthropic’s models in public comments—underscoring that debates about bias, safety and the proper role of AI in military contexts are now part of a noisy public conversation, not just closed procurement disputes.

Negotiations and next moves

Both sides have incentives to avoid a clean break. The Pentagon needs cutting-edge capabilities; firms want to retain government business and avoid being boxed out of significant contracts. Expect intensive, behind-the-scenes negotiations over contract language, technical isolation measures (for example, air-gapped or offset deployments), enhanced audit and oversight procedures, and possibly new legal constructs that clarify liability and acceptable use in classified contexts.

For the wider public and policymakers, this dispute raises familiar questions about export controls, oversight of dual-use technologies, and whether new governance models are required for AI used in national security. Whatever the immediate outcome, the negotiation will set precedents for how governments and private AI developers share sensitive capabilities while trying to keep safety guardrails intact.

Sources

  • U.S. Department of Defense (Pentagon) public statements and procurement materials
  • Anthropic press materials and company statements
  • Palantir Technologies public disclosures around partner deployments
Mattias Risberg

Cologne-based science & technology reporter tracking semiconductors, space policy and data-driven investigations.

University of Cologne (Universität zu Köln) • Cologne, Germany

Readers Questions Answered

Q What is Anthropic and why is the Pentagon threatening to cut them off over AI safeguards?
A Anthropic is a San Francisco-based AI company known for its Claude model, which has reportedly been used, via Palantir, in military operations including one targeting Venezuela's former president. The Pentagon, under Defense Secretary Pete Hegseth, is threatening to cut ties and label Anthropic a 'supply chain risk' because the company refuses to allow unrestricted military use of Claude for all lawful purposes, insisting instead on safeguards against mass surveillance of Americans and against autonomous weapons. The dispute has escalated despite months of negotiations, with the Pentagon viewing Anthropic's restrictions as excessively limiting.
Q What AI safeguards did the Pentagon say Anthropic failed to meet?
A The Pentagon says Anthropic has failed to meet its requirement for unrestricted access to Claude for all lawful military uses, particularly on classified systems. Anthropic is seeking contract terms that prohibit mass surveillance of Americans and the development of weapons that operate without human involvement, terms the Pentagon considers impractical because of legal gray areas. The Pentagon is pressing OpenAI, Google and xAI for similarly unrestricted access.
Q Has the Pentagon taken similar actions against other AI companies?
A The Pentagon has not yet taken a comparable step, such as labelling another AI company a supply chain risk, but it is engaged in tough ongoing negotiations with OpenAI, Google and xAI over the same access issues. Those companies have reportedly agreed to relax some restrictions for military use on classified systems, although the changes have not been fully implemented for sensitive operations. No reports indicate actual cut-offs or designations for those firms so far.
Q How could cutting off Anthropic affect AI safety and national security initiatives?
A Cutting off Anthropic could hinder national security work by disrupting access to Claude, which reportedly leads in specialised military applications while rival models are only 'just behind', potentially delaying critical operations. It could also pressure AI firms to weaken their safeguards, raising the risk of privacy violations through mass surveillance or of autonomous weapons development. Conversely, a cut-off would set a precedent for prioritising military requirements over corporate AI restrictions, possibly accelerating defence AI initiatives but at the cost of ethical safeguards.
Q What did CNBC report about the dispute between the Pentagon and Anthropic?
A According to the coverage cited in this article, CNBC reported that the Pentagon has pressed Anthropic and other major AI firms to make their models available on classified networks without many of the standard restrictions applied to civilian users, and that the dispute with Anthropic has escalated to threats of a cut-off. Earlier Axios reporting detailed Defense Secretary Pete Hegseth's near-decision to sever ties with Anthropic over Claude's usage restrictions, quoting Pentagon officials on labelling the company a supply chain risk and on frustration after months of talks; Hindustan Times echoed this, highlighting Anthropic's safeguards against surveillance and autonomous weapons.
