Pentagon threatens to end Anthropic work in feud over AI terms
The Pentagon warned Anthropic PBC that it would terminate the company’s military contracts on Friday if the artificial intelligence startup failed to meet government terms for use of its technology, according to people familiar with the matter.
During a meeting Tuesday between Chief Executive Officer Dario Amodei and Defense Secretary Pete Hegseth, U.S. officials threatened to declare Anthropic a supply-chain risk or invoke the Defense Production Act to use the AI software even if the company didn’t comply, the people said.
The ultimatum marks an escalation in a growing dispute between the Defense Department and the AI startup over the company’s insistence on guardrails for use of its Claude AI tool. If carried out, the Pentagon’s threat would put at risk up to $200 million in work that Anthropic had agreed to do for the military.
In the meeting, according to one of the people, Amodei laid out Anthropic’s conditions: that the U.S. military refrain from using its products to autonomously target enemy combatants or conduct mass surveillance of U.S. citizens. The person said Amodei emphasized that these scenarios have yet to arise during operations in the field.
“We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do,” Anthropic said in a statement following the meeting.
The people who described the discussions did so on condition of anonymity owing to their confidential nature. Axios reported earlier on the meeting’s outcome.
Now valued at roughly $380 billion based on its latest funding round, Anthropic was the first AI company granted clearance to handle classified material within the U.S. government, and its Claude Gov tool quickly became a preferred option among officials at the Pentagon who appreciate its ease of use. It faces growing competition in the national security space from rivals OpenAI, Google’s DeepMind and Elon Musk’s xAI.
The Pentagon had grown concerned that Anthropic did not support U.S. goals after hearing the company had questions about how its AI was used during the special forces operation in early January that captured Venezuelan President Nicolas Maduro, a U.S. official said. Anthropic offered a different interpretation of the Pentagon’s claim that the company had questions about the Maduro raid.
“Anthropic has not discussed the use of Claude for specific operations with the Department of War,” the company said on Monday, via a spokesperson, referring to the Trump administration’s preferred name for the Defense Department. “We have also not discussed this with, or expressed concerns to, any industry partners outside of routine discussions on strictly technical matters.”
Anthropic positions itself as a company focused on the responsible use of AI with a goal of avoiding catastrophic harms from the technology. It built Claude Gov specifically for U.S. national security purposes and aims to serve government customers within its own ethical bounds.
The feud erupted just weeks after the Pentagon published a new strategy on artificial intelligence that called for making the military an “AI-first” force by increasing experimentation with frontier models and reducing bureaucratic barriers to use. The approach specifically urged the Defense Department to choose models that are “free from usage policy constraints that may limit lawful military applications.”
©2026 Bloomberg L.P. Visit bloomberg.com. Distributed by Tribune Content Agency, LLC.