Google has removed the part of its AI policy that previously prohibited the development and deployment of AI for weapons or surveillance.
When Google first published its AI policy in 2018, it included a section called “applications we won’t pursue”, in which the company pledged not to develop or deploy AI for weapons or surveillance.
Now it has removed that section, Bloomberg reports; archived versions of the page show the previous text was still in place as recently as last week.
In its place is a new section, “Responsible development and deployment”, in which Google states that the company will implement “appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights”.
Alongside the change, Google points to a blog post in which the company argues that the update is necessary because AI is now used for far more general purposes.
Thousands of employees protested
In 2018, Google held a controversial government contract known as Project Maven, under which the company provided AI software to the US Department of Defense to analyze drone footage. Thousands of Google employees signed a protest against the contract, and dozens chose to leave.
It was against the backdrop of that contract that Google published its AI guidelines, in which it promised not to develop AI for use in weapons. The tech giant’s CEO, Sundar Pichai, reportedly told staff he hoped the guidelines would stand the “test of time”.
In 2021, the company signed a new contract to provide cloud services to the US military. The same year, it signed a contract with the Israeli military, known as Project Nimbus, to provide cloud services for the country. And in January this year, The Washington Post reported that Google employees were working with Israel’s Ministry of Defense to expand the government’s use of AI tools.