South China Morning Post • 2/14/2026 – 2/16/2026

The Pentagon is reportedly close to severing ties with the artificial intelligence firm Anthropic due to frustrations over restrictions on the use of its AI technology, specifically the Claude model. According to Axios, the Pentagon may designate Anthropic as a supply chain risk as discussions regarding the extension of their contract have stalled. The delays are attributed to additional safeguards that Anthropic wishes to implement on Claude, intended to prevent its use for certain applications.

The Wall Street Journal has reported that the Pentagon used Anthropic's Claude during a military operation to capture Venezuelan President Nicolás Maduro. The operation involved significant military action in Caracas, resulting in the deaths of 83 individuals, according to Venezuela's defense ministry. The use of Claude in this context raises concerns, as Anthropic's terms of use explicitly prohibit the application of its AI for violent purposes, weapon development, or surveillance activities.

The Pentagon is currently negotiating with four AI companies, including Anthropic, to allow military use of their tools for "all lawful purposes." The ongoing dispute, however, centers on whether Claude can be employed for mass domestic surveillance and autonomous weaponry, both contentious issues in the context of military operations and ethical AI use. The situation reflects broader tensions between the military's operational needs and the ethical guidelines set by AI developers.