The Guardian • 2/14/2026

The Wall Street Journal reported that the US military used Anthropic's AI model, Claude, during an operation aimed at capturing Venezuelan President Nicolás Maduro. The operation involved significant military action in Caracas and resulted in the deaths of 83 people, according to Venezuela's defense ministry. The use of Claude in this context raises ethical concerns, as Anthropic's terms of use explicitly prohibit applying its AI to violence, weapons development, or surveillance.

The Pentagon is reportedly close to severing ties with Anthropic out of frustration over restrictions on the use of its technology, particularly Claude. Talks on extending the contract have stalled, with delays attributed to additional safeguards Anthropic wants to place on the model. Those safeguards are intended to block certain applications of the AI, creating tension between the military's operational needs and the ethical guidelines set by AI developers.

The Pentagon is currently negotiating with four AI companies, including Anthropic, to permit military use of their tools for "all lawful purposes." The dispute centers on whether Claude can be employed for mass domestic surveillance and autonomous weaponry, both contentious flashpoints in the debate over ethical AI use in military operations.