By Amy Thomson, Bloomberg
The Pentagon is close to cutting ties with Anthropic and may label the artificial intelligence company a supply chain risk after becoming frustrated with restrictions on how it can use the technology, Axios reported.
The breakdown follows months of contentious negotiations over how the military can use the Claude tool, Axios said, citing an unidentified source familiar with the talks. In particular, Anthropic wants to ensure its AI isn’t used for mass surveillance of citizens or to develop weapons that can be deployed without a human involved, the article said. The government wants to be allowed to use Claude for “all lawful purposes,” it said.
If the AI company is deemed a supply chain risk, any company that wants to do business with the military will have to cut ties with Anthropic, Axios said, citing a senior Pentagon official. Pentagon spokesman Sean Parnell told Axios that the relationship was being reviewed. A spokesperson for Anthropic told Axios it was having “productive conversations, in good faith” with the Department of War and said the company is committed to using AI for national security.
A representative for Anthropic did not immediately respond to a Bloomberg request for comment.
Anthropic won a two-year agreement with the US Defense Department last year that involved a prototype of the company’s Claude Gov models and Claude for Enterprise. The Anthropic negotiations may set the tone for talks with OpenAI, Google and xAI, whose models aren’t yet used for classified work, Axios said.
Anthropic, founded by former OpenAI researchers, positions itself as a more responsible AI company that aims to avoid catastrophic harms from the advanced technology.
©2026 Bloomberg L.P.