When Anthropic last year became the first major AI company cleared by the US government for classified use, including military applications, the news didn't make a major splash. But this week a second development hit like a cannonball: The Pentagon is reconsidering its relationship with the company, including a $200 million contract, ostensibly because the safety-conscious AI firm objects to participating in certain lethal operations. The so-called Department of War might even designate Anthropic as a "supply chain risk," a scarlet letter usually reserved for companies that do business with countries scrutinized by federal agencies, like China, which means the Pentagon wouldn't do business with companies using Anthropic's AI in their defense work. In a statement to WIRED, chief Pentagon spokesperson Sean Parnell confirmed that Anthropic was in the hot seat. "Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people," he said. This is a message to other companies as well: OpenAI, xAI, and Google, which currently have Department of Defense contracts for unclassified work, are jumping through the requisite hoops to get their own high-level clearances.
There's a lot to unpack here. For one thing, there's the question of whether Anthropic is being punished for complaining that its AI model Claude was used as part of the raid to remove Venezuela's president Nicolás Maduro (that's what's being reported; the company denies it). There's also the fact that Anthropic publicly supports AI regulation, an outlier stance in the industry and one that runs counter to the administration's policies. But there's a bigger, more disturbing issue at play. Will government demands for military use make AI itself less safe?
Researchers and executives believe AI is the most powerful technology ever invented. Nearly all of the current AI companies were founded on the premise that it's possible to achieve AGI, or superintelligence, in a way that prevents widespread harm. Elon Musk, the founder of xAI, was once the biggest proponent of reining in AI; he cofounded OpenAI because he feared the technology was too dangerous to be left in the hands of profit-seeking corporations.
Anthropic has carved out a space as the most safety-conscious of them all. The company's mission is to have guardrails so deeply integrated into its models that bad actors cannot exploit AI's darkest potential. Isaac Asimov said it first and best in his laws of robotics: A robot may not injure a human being or, through inaction, allow a human being to come to harm. Even if AI becomes smarter than any human on Earth, an eventuality that AI leaders fervently believe in, those guardrails must hold.
So it seems contradictory that leading AI labs are scrambling to get their products into cutting-edge military and intelligence operations. As the first major lab with a classified contract, Anthropic provides the government with a "custom set of Claude Gov models built exclusively for U.S. national security customers." Still, Anthropic says it did so without violating its own safety standards, including a prohibition on using Claude to produce or design weapons. Anthropic CEO Dario Amodei has specifically said he doesn't want Claude involved in autonomous weapons or AI-driven government surveillance. But that might not fly with the current administration. Department of Defense CTO Emil Michael (formerly the chief business officer of Uber) told reporters this week that the government won't tolerate an AI company limiting how the military uses AI in its weapons. "If there's a drone swarm coming out of a military base, what are your options to take it down? If the human response time is not fast enough … how are you going to?" he asked rhetorically. So much for the first law of robotics.
There's a good argument to be made that effective national security requires the best technology from the most innovative companies. And while even a few years ago some tech companies flinched at working with the Pentagon, in 2026 they are often flag-waving would-be military contractors. I have yet to hear any AI executive talk about their models being associated with lethal force, but Palantir CEO Alex Karp isn't shy about saying, with apparent satisfaction, "Our product is used from time to time to kill people."
