In this hypothetical scenario, on February 27, 2026, US Secretary of Defense Pete Hegseth designated Anthropic, a San Francisco AI company, as a “supply chain risk to national security.” This label, previously applied to Chinese firms like Huawei and ZTE for alleged surveillance backdoors, was now turned against an American company founded by former OpenAI researchers. Anthropic's “offense” was its refusal to allow the US military to use its AI models for mass domestic surveillance of American citizens or for fully autonomous lethal weapons. Hours after Anthropic's blacklisting, OpenAI CEO Sam Altman announced that his company had reached its own deal with the Pentagon, stating that his models would be available for all lawful purposes.
That same evening, Caitlin Kalinowski, OpenAI’s most senior hardware executive, who had spent 16 months building the company's robotics program, announced her resignation. She stated that “Surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.” As it turned out, these crucial ethical lines had not been deliberated at all; they had been drawn in a contract dispute and then erased in a Friday-afternoon press release.
While this story can be viewed as a power struggle between two American companies and the US administration, that perspective is incomplete. The events between Anthropic, OpenAI, and the Pentagon in early 2026 also illuminate a critical question about democratic governance: who gets to set the terms for deploying the era's most consequential technologies, and what happens when a government prioritizes a vendor's immediate compliance over negotiated safeguards.
The rapid pace of these events obscured their significance, so they warrant a clear account. Anthropic had secured a $200 million Pentagon contract in July 2025 for classified systems work. The contract included two specific restrictions: the Claude model could not be used for mass domestic surveillance of American citizens, nor could it power fully autonomous weapons without a human in the targeting loop. These were not novel demands; they aligned with longstanding prohibitions in international humanitarian law and US constitutional protections. By any reasonable standard, these were safeguards a democratic government should seek to embed in its AI systems. The Pentagon disagreed, demanding “unrestricted access to AI for all lawful purposes” in its final ultimatum. When Anthropic refused to remove its restrictions, Hegseth set a deadline of 5:01 pm on February 27, which passed without agreement. President Trump, writing on Truth Social, subsequently labeled Anthropic’s leadership “leftwing nut jobs.”