
Anthropic has said it will not accept US Department of Defense contract terms that would allow its artificial intelligence tools to be used for any lawful purpose, and that it would rather walk away from Pentagon contracts than permit applications it believes could undermine democratic values. The dispute centers on the potential use of Anthropic's Claude models for mass domestic surveillance and fully autonomous weapons.
Dispute Over Contract Language
Chief executive Dario Amodei said on Thursday that his company could not accept revised contract wording that, in its view, failed to prevent Claude from being used for mass surveillance of Americans or in fully autonomous weapons systems. His comments followed a meeting two days earlier with US Secretary of Defense Pete Hegseth, at which the Defense Department pressed Anthropic to allow its models to be used for all lawful purposes. The meeting ended with the department threatening to remove Anthropic from its supply chain.
Amodei said the use cases at issue had never been included in the company’s contracts with what he referred to as the Department of War, a secondary name for the Defense Department under an executive order signed by President Donald Trump in September. He said the company believes those use cases should not be added now. He added that if the department chooses to offboard Anthropic, the company will work to ensure a smooth transition to another provider.
An Anthropic spokeswoman said updated contract language received on Wednesday night represented virtually no progress on limiting Claude's use for mass surveillance or fully autonomous weapons. She said language presented as a compromise included provisions that could allow safeguards to be disregarded, and that these issues had been central to the negotiations for months.
A Defense Department representative could not be reached for comment. Before Amodei’s remarks, Pentagon spokesman Sean Parnell wrote on X that claims the department wanted to use Anthropic for mass surveillance or fully autonomous weapons were false. He said the Pentagon’s request was to allow use of the model for all lawful purposes.
Potential Government Measures
A Pentagon official previously told the BBC that if Anthropic did not comply, Hegseth would ensure the Defense Production Act was invoked. The act allows the US president to require companies to meet defense needs if their products are deemed critical. Hegseth also threatened to designate Anthropic as a supply chain risk, which would effectively bar it from government use.
A former Defense Department official who requested anonymity told the BBC that the grounds for invoking such measures were extremely flimsy. A person familiar with the negotiations said tensions between Anthropic and the Pentagon had been building for several months, including before it became public that Claude was used as part of a US operation to seize Venezuelan President Nicolás Maduro.
Anthropic’s Position On AI Use
In a company blog post, Amodei said AI systems can assemble scattered data into detailed profiles of individuals at large scale, raising concerns about mass domestic surveillance. He said Anthropic supports lawful foreign intelligence and counterintelligence missions but views domestic mass surveillance as incompatible with democratic values.
Regarding weapons, Amodei said current AI systems are not reliable enough to power fully autonomous weapons. He said the company would not knowingly provide products that put warfighters or civilians at risk and that autonomous systems lack the oversight and judgment exercised by trained personnel. He added that Anthropic had offered to work with the Defense Department on research and development to improve reliability, but that the offer had not been accepted.
