Google is ‘proud’ to serve the Pentagon – new DoD contract expansion says Gemini will only be used for ‘any lawful purpose’, but what happened to ‘Don’t Be Evil’?


  • Google is edging into the military/government market
  • New Pentagon contract allows Gemini use for ‘any lawful purpose’
  • Google employees are not happy with the new contract

Google recently expanded its contract with the US Department of Defense (DoD), making Gemini available on classified networks for “any lawful purpose”, and has also pulled out of a $100 million Pentagon challenge to build autonomous, voice-controlled drone swarms.

At the same time, Google is facing internal dissatisfaction over its decision to provide the Pentagon with Gemini for classified projects, and has responded by telling staff it is ‘proud’ of the contract.

So how have Google’s ethics and policies evolved over time? And are they changing to allow the company to edge into a highly lucrative – although ethically dubious – slice of government pie?


Grounding the drones

Google’s pivot away from its once widely recognized motto of “Don’t Be Evil” may, in the eyes of some employees, now be complete – but this isn’t the first time the company has changed its policy. Google’s AI principles once stated that it would not deploy AI tools where they were “likely to cause harm,” and would not “design or deploy” AI tools for surveillance or weapons.

Google said its withdrawal from the Pentagon competition – which seeks technology capable of turning spoken instructions into commands for an autonomous drone swarm – was a matter of a lack of resources, but the actual cause was an internal ethics review, Bloomberg reports.

This suggests, at least, that the internal ethics board is still functioning and not entirely toothless.

On the other hand, with the company expanding Gemini’s availability into classified networks, the Pentagon is free to use the model for “any lawful purpose” – a clause that is more bark than bite.


Before the turn of the century, communications providers were under no legal obligation to build wiretap capabilities into their networks for law enforcement – but CALEA and the Patriot Act changed all that. Federal law enforcement was also once unable to legally seize data stored on servers in foreign countries – but the CLOUD Act changed that too.

Things are only illegal until they’re legal, and vice versa – effectively giving the Pentagon a future-proof loophole should its intended use case suddenly be legalized.

The “any lawful purpose” clause therefore offers no significant protection against the use of AI for autonomous weapons systems or mass domestic surveillance – the very uses Anthropic protested. It is weakened further by a clause in the Google-DoD contract stating that the company does not have “any right to… veto lawful government operational decision-making” – something OpenAI also encountered in its Pentagon deal.

This gives the Pentagon near-free rein over the direction it takes with Gemini in its classified projects. Mass surveillance has been happening for decades; AI’s role is simply to make it smarter, more targeted, and more efficient.

A slice of Pentagon pie

The appeal of working as a government and military contractor is simple: there is a lot of money involved. Before the ink had fully dried on Anthropic’s break with the Pentagon, OpenAI had a shiny expanded contract to fill exactly the role Anthropic was looking to avoid.

Similarly, Microsoft and Amazon have already won numerous contracts for cloud, AI, and cybersecurity tools, and it appears Google is trying to play catch-up.

Google’s employees have long pushed back on the ethics of working with the government. In 2018, employee protests saw the company drop out of Project Maven, which used Google technology to analyze drone strike footage. Those protests also gave rise to Google’s now-missing ‘do no harm’ AI principles.

Google also faced similar dissent when employees opposed the company’s potential involvement in providing technology to Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP).

As is tradition, Google’s employees are once again forming digital picket lines, with over 600 signing a letter to CEO Sundar Pichai asking him to reject any use of Google’s AI technology for military purposes.

In response, Kent Walker, Google’s president of global affairs, wrote in an internal memo on Tuesday seen by The Information, “We have proudly worked with defense departments since Google’s earliest days, and we continue to believe that it’s important to support national security in a thoughtful and responsible way.”




