US Secretary of Defense Pete Hegseth has made Anthropic an offer it may not be able to refuse. The Defense Department and the AI firm held a meeting at the Pentagon on Tuesday, where the government tried to compel the house of Claude to lift some restrictions on military use of its tech. However, recent changes to the company's safety policy suggest it may be willing to be more flexible than it's letting on.
The Pentagon's unhappiness with Anthropic has been in the news since the end of last month, when Reuters reported that the two were clashing over safeguards that would prevent the DoD from using Anthropic's AI to autonomously target weapons without human intervention and to conduct domestic surveillance within the United States.
The Register has confirmed with individuals on both sides of the discussion that a meeting between Anthropic CEO Dario Amodei and Hegseth on Tuesday has done little to change Anthropic's mind on the matter, with the Pentagon now trotting out threats to get what it wants.
A senior Pentagon official told us that, if Anthropic refuses to drop its restrictions on the Defense Department's use of its AI by the end of the day on Friday, the department may compel the company to comply through the Defense Production Act.
The DPA gives the President, and any executive branch officials to whom he delegates such authority (like the Defense Secretary), broad power to require businesses to accept contracts deemed necessary to promote the national defense. That authority, the official told us, would give the Pentagon the right to use Anthropic's AI regardless of what the company wants.
The DoD is also reserving the right to declare Anthropic a supply chain risk, essentially forcing any company that contracts with the US government to eliminate Anthropic software anywhere it's used in their dealings with the federal government. Such a move could be a major financial blow to the AI provider.
Additionally, sources familiar with the meeting told us that the Pentagon was ready and willing to terminate the contract, worth up to $200 million, that the agency signed with Anthropic (alongside similar agreements with Google, OpenAI, and xAI) if the company doesn't agree to its terms.
We're told that Anthropic has maintained its red lines for use of its AI by the US military: no autonomous weapons that use AI to make final targeting decisions, and no domestic surveillance of American citizens, even if lawful.
The Pentagon told us that it has always followed the law, that it has only issued lawful orders, and that its intended use of Anthropic's AI has nothing to do with mass surveillance or autonomous weapons.
Legal usage of Anthropic's AI, the Pentagon official said, is the department's responsibility as the end user - not Anthropic's.
Coincidentally or not, Anthropic also released the third iteration of its Responsible Scaling Policy on Tuesday, the same day Amodei met with Hegseth in Washington, DC. The new version lacks a key safety pledge that Anthropic has been pushing for years.
Prior editions of the RSP included a clause that stated Anthropic would cease training AI models that it couldn't guarantee were safe, and wouldn't release any model without proper risk mitigations in place. Those guarantees are gone, with the company citing the need to remain competitive in the AI space as the reason for their removal.
"We felt that it wouldn't actually help anyone for us to stop training AI models," Anthropic's science chief Jared Kaplan told Time in an interview ahead of the RSP update's release. "We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead."
According to a blog post outlining changes in the new version of the RSP, AI competitiveness and economic growth have become the driving forces in the current policy environment, with Anthropic lamenting that safety discussions have been left by the wayside.
"We remain convinced that effective government engagement on AI safety is both necessary and achievable," Anthropic explained. "But this is proving to be a long-term project—not something that is happening organically as AI becomes more capable or crosses certain thresholds."
Anthropic's admission that its priorities have shifted from safety first to competitiveness raises the question of whether it may ultimately comply with the Pentagon rather than lose a massive contract, risk being blacklisted across the defense industry, and be pressed into service against its wishes anyway.
We put that question to Anthropic, but didn't hear back before publication. We'll update this story if we do. ®
Source: The Register