OpenAI, the company behind ChatGPT, has quietly deepened its ties with the US military despite previously banning such collaborations. The recent announcement of a Pentagon deal, which came after a Pentagon contract with Anthropic fell through, has sparked internal criticism and questions about the firm's evolving policies.
This isn’t just about one deal; it reveals a pattern of OpenAI navigating murky ethical ground while engaging with defense agencies.
The Policy Shift: From Ban to Partnership
In 2023, OpenAI explicitly prohibited military access to its AI models. However, the Pentagon was already experimenting with OpenAI technology through the Azure OpenAI Service run by Microsoft, a long-standing OpenAI partner and investor. Employees reportedly observed Pentagon officials visiting OpenAI headquarters while the ban was in effect, creating confusion about how the policy was actually being enforced.
OpenAI and Microsoft maintain that Azure OpenAI products were never subject to OpenAI’s usage restrictions, a distinction that allowed military access to continue behind the scenes. By January 2024, OpenAI quietly removed the blanket ban on military use, with employees learning about the change through outside reporting rather than internal communication.
Expansion into Defense: Anduril and Beyond
OpenAI has since partnered with Anduril to develop AI systems for “national security missions.” Initially framed as limited to unclassified workloads, the partnership stands in contrast to Anthropic’s deal with Palantir, which involved classified military applications. OpenAI even declined Palantir’s offer to join its “FedStart” program, deeming the arrangement too risky.
Despite internal debate, with some employees questioning whether OpenAI’s models are reliable enough for critical tasks, the company has moved further into defense. CEO Sam Altman has publicly voiced support for responsible AI deployment while simultaneously pursuing contracts with NATO, signaling a broader ambition to sell the company’s models to international defense organizations.
Opacity and Oversight
The lack of transparency surrounding these partnerships is a significant concern. Former OpenAI geopolitics head Sarah Shoker argues that the opacity of military AI makes its real-world effects nearly impossible to assess, creating “black boxes all the way down.” Experts suggest the Pentagon may already be using OpenAI’s models for legal forms of surveillance, such as purchasing and analyzing user data.
OpenAI has amended its agreement in response to some of these concerns, but without full disclosure the public must take the company at its word. This raises questions about accountability and leaves open the potential for unchecked military applications of AI.
In essence, OpenAI has traded its initial ethical stance for deeper engagement with the defense sector. This shift highlights the tension between commercial interests, national security demands, and the evolving role of AI in warfare. Without public oversight, the full implications of these partnerships will remain largely unknown, leaving civilians in conflict zones vulnerable to the unchecked deployment of military AI.
