The Pentagon is reportedly moving ahead with alternatives to Anthropic after relations between the company and the Department of Defense deteriorated over restrictions on military AI use. According to TechCrunch, which cites a Bloomberg interview with Pentagon chief digital and AI officer Cameron Stanley, the department is actively pursuing multiple large language models for use inside government-owned environments.
Stanley said engineering work has already begun and that the models are expected to become available for operational use in the near term. The reported shift follows the collapse of Anthropic’s $200 million Department of Defense contract after the two sides failed to agree on how much access to, and operational control over, Anthropic’s systems the military would have.
TechCrunch says Anthropic wanted contractual safeguards preventing the Pentagon from using its AI for mass surveillance of Americans or for weapons systems capable of firing without human intervention. The Pentagon did not accept those limits. In the aftermath, OpenAI reportedly signed its own agreement with the Pentagon, while Elon Musk’s xAI also reached a deal to make Grok available in classified systems.
The article says the Pentagon now appears to be preparing to phase Anthropic’s technology out of its workflows entirely. Reinforcing that conclusion, Defense Secretary Pete Hegseth has designated Anthropic a supply-chain risk, a label that can bar Pentagon-linked contractors from working with the company. Anthropic is challenging the designation in court.
The significance of the dispute extends beyond a single contract. It shows that tensions between frontier AI developers and national security agencies are no longer limited to procurement or cost, but increasingly center on governance, acceptable use, and how far AI suppliers can go in imposing ethical or operational boundaries on government customers.