The U.S. Department of War’s push for military-grade AI deployment hit a snag this week when OpenAI abruptly revised its terms, inserting stricter safeguards that now mirror the very red lines it had previously criticized in competitors like Anthropic.

Where Anthropic’s refusal to compromise on mass surveillance and autonomous weapons led to the loss of a $200 million contract, OpenAI’s pivot—though framed as an effort to ‘de-escalate’—has triggered its own wave of backlash. The company now explicitly prohibits its systems from being used for domestic tracking, high-stakes automated decisions, or directing autonomous weapons. Yet the timing and language suggest less a principled shift than a reactive adjustment, one that leaves open questions about whether OpenAI can maintain both commercial viability and ethical consistency in high-stakes government work.

Key revisions to the deal include:

  • No domestic surveillance: AI systems cannot be intentionally used for tracking or monitoring U.S. persons, including through commercially acquired personal data.
  • No autonomous weapons: Explicit prohibition against directing systems that could operate independently in lethal contexts.
  • No high-stakes automation: A ban on applications such as ‘social credit’ scoring, though the phrasing leaves room for interpretation of what counts as ‘high stakes.’

The changes reflect a calculated response to both public pressure and legal scrutiny. OpenAI’s CEO acknowledged in an internal post that rushing the initial agreement was a misstep, admitting it appeared ‘opportunistic and sloppy’—a rare moment of self-criticism for a company accustomed to rapid, high-profile moves. Yet the underlying tension remains: how does a company navigate government contracts without compromising its public image or core principles when those principles are still being defined in real time?


OpenAI insists its safeguards are more robust than those of any previous classified AI deployment, citing multi-layered oversight and contractual protections. But the comparison to Anthropic’s stance is unavoidable, particularly since both companies now share nearly identical red lines—suggesting that OpenAI learned from its rival’s contract cancellation rather than leading with an ethical framework of its own.

For users and regulators alike, the question lingers: was this a genuine rethinking or a damage-control maneuver? The surge in ChatGPT uninstalls—reported to have spiked 295% after the initial deal was announced—hints at the depth of public skepticism. Whether OpenAI can restore trust will depend less on the fine print than on whether its actions align with its stated principles over time.