Anthropic Pushes Back on Pentagon Pressure to Loosen AI Use Restrictions
While Google, OpenAI, and xAI complied with the military’s request to adjust their terms of service, Anthropic—the first AI company approved for classified military networks—refuses to accept the “any lawful purpose” terms as defined by the Department.
[Image source: Pankaj Kirdatt/MITSMR Middle East]
After the U.S. military deployed Anthropic’s Claude model in a January operation that led to the capture of former Venezuelan President Nicolás Maduro, the company is now locked in a high-stakes dispute with the Pentagon over how far its artificial intelligence can be used in defense applications.
The standoff escalated after the U.S. Department of Defense awarded the company a contract worth up to $200 million in July 2025 as part of broader efforts to integrate advanced AI into national security.
However, Anthropic CEO Dario Amodei says the company cannot accede to the Pentagon’s request to loosen certain safeguards governing the military use of its large language models.
The first AI company to be approved for classified military networks, Anthropic has been working to pinpoint high-impact frontier AI applications, build prototypes fine-tuned on DOD data, counter adversarial AI threats through risk forecasting, and share technical insights to enable faster, responsible AI integration across defense operations.
Despite the announcement of military engagement, the startup reaffirmed its commitment to responsible AI. “Our commitment to responsible AI deployment, including rigorous safety testing, collaborative governance development, and strict usage policies, makes Claude uniquely suited for sensitive national security applications,” it stated in an official blog post.
As of last week, Google, OpenAI, and xAI have complied with the military’s request to adjust their terms of service, allowing their models to be applied to “any lawful purpose” as defined by the Department.
However, Anthropic is not backing down. “We cannot in good conscience accede to their request,” the company said in an official statement released hours before the Pentagon’s deadline to decide.
Amodei said the startup has put the nation’s AI and defense needs above all else, including its own short-term financial interests, and noted that it has never raised objections to particular military operations nor attempted to limit the use of its technology in an ad hoc manner.
He explicitly named two use cases, mass surveillance and fully autonomous weapons, that “have never been included in our contracts with the Department of War, and we believe they should not be included now.”
“We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values…Even fully autonomous weapons (those that take humans out of the loop entirely and automate target selection and engagement) may prove critical to our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk,” he said.
“To our knowledge, these two exceptions have not been a barrier to accelerating the adoption and use of our models within our armed forces to date,” he added.
US Under Secretary of War Emil Michael responded to the rejection, calling Amodei “a liar” with “a God-complex.”
“He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk. The @DeptofWar will ALWAYS adhere to the law but not bend to the whims of any one for-profit tech company,” Michael posted on X.
Many users called him out, asking him to elaborate on exactly what he disagreed with about the AI startup.
Chief Pentagon spokesman Sean Parnell refuted the claims of the Department seeking to use AI for unlawful purposes.
“The Department of War has no interest in using AI to conduct mass surveillance of Americans (which is illegal), nor do we want to use AI to develop autonomous weapons that operate without human involvement. This narrative is fake and being peddled by leftists in the media,” Parnell stated in a post on X.
If Anthropic maintains the safeguards, it risks being removed from the Pentagon’s systems: the Department could designate the company a supply chain risk and invoke the Defense Production Act to force the removal of its models.