AP News • 2/14/2026 – 2/27/2026

Anthropic, the AI company behind the Claude family of models, has been officially designated a "supply-chain risk" by the Pentagon following failed negotiations over acceptable-use policies. The designation, which typically applies to foreign companies with ties to adversarial governments, is unprecedented for an American firm and could bar defense contractors from working with Anthropic if they use its AI in their products.

The escalation follows weeks of public ultimatums and threats of lawsuits from the Defense Department. The conflict intensified when President Donald Trump directed federal agencies to "IMMEDIATELY CEASE" using Anthropic's products, accusing the company of attempting to "STRONG-ARM" the Pentagon. The directive came after Anthropic CEO Dario Amodei refused to agree to an updated contract that would have permitted the military to use the company's technology for "any lawful use," including mass domestic surveillance.

The Pentagon's supply-chain-risk designation is effective immediately and complicates Anthropic's position in the defense sector. Amodei has voiced ethical concerns about unchecked government use of AI, a stance that has fueled the ongoing standoff between the company and the Pentagon.

The dispute has raised alarms across the AI industry, with some commentators describing Trump's actions as "attempted corporate murder" and warning of chilling effects on innovation and collaboration in the sector. With Anthropic's $200 million Pentagon contract having fallen apart, the Department of Defense has turned to OpenAI for its AI needs, further clouding Anthropic's future in federal contracting.