Reason Magazine • 2/14/2026 – 2/27/2026

Dario Amodei, CEO of Anthropic, has publicly refused the Pentagon's demand that the company remove certain safeguards from its AI product, Claude. In a blog post, Amodei wrote that Anthropic "cannot in good conscience accede" to the Defense Department's request to permit broader uses of its AI, including mass surveillance and autonomous weapons. The Pentagon had threatened to cancel a $200 million contract with Anthropic and to label the company a "supply chain risk" if it did not comply.

Amodei said he strongly prefers to keep supporting the Department of Defense and U.S. military personnel, but with the existing safeguards in place, which he described as essential to the safe and ethical use of AI. The standoff arose after the Pentagon insisted that Anthropic make Claude available for "all lawful purposes," a demand that raised concerns about its implications for ethics in military applications. In response to Anthropic's refusal, Under Secretary of Defense Emil Michael accused Amodei of trying to exert personal control over the U.S. military and of compromising national safety. The dispute underscores the ongoing tension between AI companies and government agencies over the ethical use of advanced technology in military contexts.
