Since the fallout with Anthropic, the Pentagon has accelerated its efforts to bring on other AI companies willing to agree to expanded usage terms for their models and infrastructure on secret and top-secret networks. Defence officials are also seeking to ensure the U.S. military avoids depending on any single company or set of limitations, according to one of the Pentagon officials briefed on the talks.
Nvidia’s new agreement, for instance, gives far greater license to the Pentagon than the terms of use in previous AI deals. The company has agreed not to impose any usage policies or model licenses that would restrict the Defense Department’s use of its models beyond what is required by U.S. law and constitutional authority, according to a person familiar with the agreement, who asked not to be named to discuss sensitive matters.
Nvidia agreed to provide “full and effective use of their capabilities in support of Department missions,” including for autonomous weapons systems development, according to the person.
The Department’s use of any Nvidia models, weights or other capabilities will be consistent with Americans’ civil liberties and constitutional rights under the law, the person said, a commitment that stops short of any clearly stipulated monitoring and evaluation mechanisms.
The agency gave itself six months to replace Claude, which is being used for U.S. military operations against Iran. The dispute with Anthropic is now mired in a court battle.
On Thursday, Secretary of Defense Pete Hegseth described Anthropic’s leader as an “ideological lunatic” and defended his department’s use of AI.
“We follow the law and humans make decisions,” Hegseth told Congress. “AI is not making lethal decisions.”
The Pentagon’s effort to equip the U.S. military with cutting-edge AI at the classified level will support “human-machine teams” that can handle immense volumes of data, Cameron Stanley, the defence agency’s chief digital and AI officer, said in a statement referring to the new deals.
Although OpenAI signed a new agreement with the Pentagon earlier this year for expanded use of its models, its tools have not yet been deployed on classified defence networks, according to an OpenAI spokesperson, who added that implementation is underway.
Several campaign groups have highlighted the risks of relying on unpredictable AI-assisted systems in support of life-and-death decisions. AI systems can be prone to error and can lead to automation bias, or a tendency to trust machine outputs over human reasoning, the critics have argued.
Stanley didn’t specify the precise ways in which the Pentagon intends to use AI models in classified operations. He described them as digital tools that would make it easier for the Pentagon to crunch through data, increase understanding in complex environments and make “better decisions, faster.”
Claude is among the AI tools used on Maven Smart System, a digital platform that supports targeting and battlefield operations, including during the Iran campaign. U.S. Central Command has said it is using a variety of AI tools to speed up its processes.