Google Joins OpenAI, xAI for a Classified AI Deal with Pentagon

The agreement will require Google to adjust its AI safety settings and filters and does not give the tech firm the right to control or veto the government's operational decisions.

Image Credit: Krishna Prasad / MIT Sloan Management Review Middle East

Search engine giant Google has joined the list of U.S. Department of Defense contractors supplying artificial intelligence technology.

According to a report by The Information, Google’s artificial intelligence models will be used for classified work, and the agreement will let the Pentagon use its AI for “any lawful government purpose.”

In 2025, the Pentagon signed agreements worth about $200 million each with major AI labs OpenAI and xAI. With the latest move, Google’s parent company Alphabet joins them to supply AI models for classified use, including mission planning and weapons targeting.

The agreement will require Google to adjust its AI safety settings and filters at the government’s request. It further clarifies that “the AI System is not intended for, and should not be used for, domestic mass surveillance or autonomous weapons (including target selection) without appropriate human oversight and control.”

The agreement also does not give Google the right to control or veto lawful government operational decision-making.

“We believe that providing API access to our commercial models, including on Google infrastructure, with industry-standard practices and terms, represents a responsible approach to supporting national security,” a spokesperson for Google told Reuters.

Amid this development, around 600 Google employees signed a letter urging CEO Sundar Pichai to stop the company’s AI systems from being used in classified U.S. military applications.

    “As people working on AI, we know that these systems can centralise power and that they do make mistakes,” the employees said, adding that the company should prevent “its most unethical and dangerous uses.”

Employees from teams including DeepMind and Cloud signed the appeal, voicing concerns about the ethical and operational risks of these deployments.

This isn’t the first time internal resistance to government use of the company’s systems has been voiced. In 2018, employee protests led Google to step away from the Pentagon’s Project Maven initiative, which used AI to analyse drone footage.

In March, AI startup Anthropic, which was the first AI lab to be approved for classified military networks, was removed from the U.S. Department of Defense (DoD) supply chain following a “supply chain risk” designation by Defense Secretary Pete Hegseth.

Anthropic’s Claude AI was utilized by U.S. special forces during an operation in early January 2026 to capture Nicolás Maduro in Caracas.

“We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values…,” an Anthropic blog read.

The Pentagon has made clear that it does not intend to use AI to conduct mass surveillance of Americans or to develop weapons that operate without human involvement, but it wants “any lawful use” of AI to be allowed.
