
OpenAI set to become the Pentagon’s next military AI supplier as Donald Trump harshly criticises Anthropic after failed talks.

People in military attire and suits discussing documents marked "Classified" at a table with a laptop and phone nearby.

OpenAI has struck a deal with the US Department of Defense, clearing the way for its generative AI systems to be deployed in classified settings after separate talks between the Pentagon and rival firm Anthropic reportedly broke down.

The agreement, announced by OpenAI over the weekend, will allow the creator of ChatGPT to supply “advanced AI systems” for use in secure, classified environments. OpenAI also said the Pentagon had asked that similar access be made available across the wider AI industry.

OpenAI confirms Pentagon deal for classified deployments

In a statement published on its website, OpenAI said it had reached an arrangement with the Pentagon covering the roll-out of advanced AI in classified contexts. The company presented the move as part of a broader effort to make such deployments accessible to other AI specialists as well.

Generative AI tools are widely promoted for boosting workplace efficiency, but they are increasingly being explored for defence and security purposes, including planning and analysis. The latest deal formalises OpenAI’s entry into that sensitive arena.

OpenAI says its restrictions rule out mass surveillance and autonomous weapons

OpenAI insisted its technology will remain subject to safeguards. The firm said it will not allow its models to be used for mass surveillance in the United States, to control autonomous weapons systems, or to support what it described as “high-risk automated decisions” such as social credit-style scoring.

Trump and Hegseth target Anthropic after negotiations stall

The OpenAI announcement comes as Anthropic, the developer of the Claude chatbot led by chief executive Dario Amodei, faces political attacks from Donald Trump and Pete Hegseth, the US Secretary of War, after Anthropic said its own discussions with the Department of War had reached an impasse.

Anthropic said the talks broke down over two conditions it would not waive: that its AI models should not be used for mass surveillance of Americans, and should not be used in autonomous weapons.

On his Truth Social platform, Trump labelled Anthropic a “woke” and “radical left” company and said he would not allow it to dictate “how our great army fights and wins wars”. He also wrote that “the left-wing crazies at Anthropic” had made a “CATASTROPHIC MISTAKE” by trying to force the Department of War to follow its usage rules rather than the US Constitution.

Trump also said he had instructed all federal agencies to stop using Anthropic’s technology.

Hegseth orders supply-chain ban involving the US military

Hegseth said he was directing the Department of War to designate Anthropic as a national security risk within its supply chain. He added that, with immediate effect, no contractor, supplier or partner doing business with the US military would be permitted to conduct any commercial activity with Anthropic.

Anthropic responded that it had negotiated “in good faith” in an attempt to find common ground with the Pentagon. The company said it was saddened by the designation, arguing such measures have “historically” been reserved for US adversaries and had not previously been applied publicly to an American company.

Claude climbs to top spot on the App Store

The row appears to have raised Anthropic’s profile among consumers. While Claude was already well regarded in corporate settings, the Claude app has now climbed to number one in the App Store rankings in the United States, and in France as well.
