WSJ • 2/14/2026 – 3/2/2026

Donald Trump has ordered federal agencies to "IMMEDIATELY CEASE" using products from Anthropic, the AI company behind the Claude models. The directive came after Anthropic CEO Dario Amodei refused to sign an updated contract with the U.S. military that would have permitted the Pentagon to use Anthropic's technology for "any lawful use," including mass domestic surveillance. Trump accused Anthropic of attempting to "STRONG-ARM" the Pentagon in the dispute, underscoring tensions between the tech industry and government agencies over the ethical implications of AI.

In response to Trump's order, the Pentagon designated Anthropic a supply chain risk, effective immediately. The designation could bar U.S. military vendors from working with the company, complicating Anthropic's position in the defense sector. The move reflects ongoing concerns about the safe and ethical use of AI, particularly in military applications. Amodei has raised ethical objections to unchecked government use of AI, which has contributed to the friction between Anthropic and the Pentagon.

The situation has alarmed the AI industry, with some commentators describing Trump's actions as "attempted corporate murder" and warning of a chilling effect on innovation and collaboration in the sector. The episode highlights growing tensions between AI companies and government entities over the future of AI and its applications.