In short
- OpenAI signed a contract with the Pentagon to deploy AI in classified environments.
- The company says it maintained “red lines,” but the agreement permits “all lawful purposes,” a standard that ultimately depends on the federal government’s own interpretation.
- The controversy sparked the QuitGPT movement and drove a surge in Claude downloads.
OpenAI said this weekend that it has reached an agreement with the Pentagon to deploy advanced AI systems in classified environments, marking a significant expansion of the company’s work with the U.S. military.
The announcement came less than 24 hours after the Trump administration blacklisted Anthropic, designating the rival AI company a “supply chain risk to national security” following a dispute over contract language related to surveillance and autonomous weapons.
President Donald Trump also directed federal agencies to immediately stop using Anthropic’s technology, with Treasury Secretary Scott Bessent writing Monday on X that the agency “is ending all use of Anthropic products, including use of its Claude platform, within our department.”
“THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the great leaders I appoint to run our Military.
The Leftwing nut jobs at Anthropic … pic.twitter.com/aIEx92nnyx
— The White House (@WhiteHouse) February 27, 2026
The timing of the two announcements put OpenAI’s deal under intense scrutiny. In a detailed blog post, the company laid out what it described as firm “red lines” and layered safeguards governing its Pentagon partnership.
The agreement, as presented by OpenAI, raises broader questions about how AI systems will be governed in national security settings, and how the company’s stated limits will be interpreted and enforced in practice.
When “lawful” isn’t enough
OpenAI’s blog post opens with three commitments framed as non-negotiable: no use of its technology for mass domestic surveillance, to independently direct autonomous weapons systems, or for high-stakes automated decisions like social credit scoring.
Then comes the actual contract language, which OpenAI notably calls “the relevant language,” not “the full agreement.”
“The Department of War may use the AI system for all lawful purposes, consistent with applicable law, operational requirements, and established safety and oversight procedures,” OpenAI said.
That is the exact phrase Anthropic said the government had been demanding throughout negotiations. The exact phrase Anthropic refused to go along with. OpenAI signed it, yet argues its red lines remain fully intact.
However, “lawful” in national security contexts isn’t a fixed boundary; it lives inside a patchwork of statutes, executive orders, internal directives, and often classified legal opinions. When a contract grants “all lawful purposes,” the practical limit becomes the government’s current legal envelope, not an independent standard set by the vendor.
A cluster of caveats
The weapons provision reads that the AI system “will not be used to independently direct autonomous weapons in any case where law, regulation, or department policy requires human control.”
The prohibition only applies where some other authority already requires human control; it derives its teeth entirely from existing policy, particularly DoD Directive 3000.09. That directive requires autonomous systems to allow commanders to exercise “appropriate levels of human judgment over the use of force.”
And “appropriate” is about as subjective as it gets.
Human judgment is not human control. That distinction is not accidental. Defense scholars have noted that omitting “human-in-the-loop” language was deliberate, precisely to preserve operational flexibility.
OpenAI’s strongest counterargument is its cloud-only deployment architecture: fully autonomous lethal decision loops would require edge deployment on battlefield devices, which this contract does not permit. That’s a real technical constraint.
But cloud-based AI can still perform target identification, pattern-of-life analysis, and mission planning. Those are kill-chain activities regardless of where the final trigger sits. The outcome for a target does not change based on which server the model runs on.
The surveillance clause follows a similar pattern. OpenAI’s stated red line: no mass domestic surveillance. The contract language: the system “will not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities,” and then lists the Fourth Amendment, FISA, and Executive Order 12333.
The word “unconstrained” implies that a constrained version of mass surveillance would be permissible. And EO 12333 is the executive order the NSA has used to justify intercepting Americans’ communications when conducted outside U.S. borders.
This is where Anthropic’s concern about wording during the negotiations becomes apparent. Anthropic’s argument was that existing law hasn’t caught up with what AI makes possible. The government can legally purchase vast quantities of aggregated commercial data about Americans without a warrant, and has already done so.
OpenAI’s contract language, by anchoring its protections to existing legal frameworks, may not close the gap Anthropic was actually worried about.
Altman responds
On Saturday night, Altman held an AMA responding to thousands of questions about the deal. When asked what would prompt OpenAI to exit a government partnership, he answered: “If we were asked to do something unconstitutional or illegal, we will walk away.”
If we were asked to do something unconstitutional or illegal, we will walk away. Please come visit me in prison if necessary.
— Sam Altman (@sama) March 1, 2026
That framing places OpenAI’s limit at legality, not at an independent ethical judgment about what the company will or won’t enable if it happens to be legal, which is what Anthropic defends. Asked whether he worried about future disputes over what counts as “legal,” he acknowledged the risk: “If we have to deal with that fight we will, but it clearly exposes us to some risk.”
On why OpenAI reached a deal where Anthropic could not, Altman offered this: “Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with. I’d clearly rather rely on technical safeguards if I only had to pick one. I think Anthropic may have wanted more operational control than we did.”
That’s a substantive philosophical difference. Anthropic argued that because frontier models can be repurposed for intelligence and military workflows in ways that are hard to anticipate, the limits need to be explicit and binding in writing, even at the cost of the deal. OpenAI’s position is that technical architecture, embedded personnel, and existing law together constitute a stronger safeguard than contract text alone.
The public picked a side
The backlash was immediate. By Monday, the “QuitGPT” movement claimed that over 1.5 million people had taken action: canceling subscriptions, sharing boycott posts, or signing up at quitgpt.org.
The campaign framed OpenAI’s move as prioritizing military contracts over user safety, accusing the company of agreeing to let the Pentagon use its technology for “any lawful purpose, including killer robots and mass surveillance.”
OpenAI might dispute that characterization. But the market moved anyway.
Anthropic’s Claude surged past ChatGPT to become the most downloaded free app in the United States on Apple’s App Store, with the company telling Decrypt that it saw record daily signups over the weekend.
Pop star Katy Perry shared a screenshot of Claude’s pricing page on X. Numerous users documented their subscription cancellations publicly on Reddit. Graffiti praising Anthropic appeared outside its San Francisco offices, while chalk attacks covered OpenAI’s sidewalks. Even several of OpenAI’s own employees had previously signed an open letter supporting Anthropic’s refusal to accede to the Pentagon’s demands.
The QuitGPT framing is emotionally compelling, but not entirely accurate. Anthropic itself has a partnership with Palantir and Amazon Web Services that grants U.S. intelligence agencies and defense departments access to Claude models, and its technology has reportedly been used in military operations to topple the governments of Venezuela and Iran. The ethics of AI and national security contracting were never clean on either side.
What the campaign captured, accurately, is that a large segment of users believed there was a meaningful difference between how the two companies drew their limits, and they voted with their subscriptions.
Whether that difference is as meaningful as it appears requires reading the contract closely.
