Google Loosens AI Ethics Policy, Drops Restrictions on Weapons and Surveillance

Google’s updated AI ethics policy removes the company’s promise not to use its technology for weapons or surveillance applications. The original principles, published in 2018, stated that Google would not pursue AI applications intended to injure people or surveillance that violates internationally accepted norms. That language is now gone from the updated principles page.

The shift comes as the artificial intelligence landscape has advanced rapidly since OpenAI launched its chatbot ChatGPT in 2022, while legislation and regulation on AI transparency and ethics have yet to catch up. Against that backdrop, Google is loosening its self-imposed restrictions.

A blog post by senior vice president James Manyika and DeepMind head Demis Hassabis explains that AI frameworks published by democratic countries have deepened Google’s understanding of AI’s potential and risks. They argue that there is a global competition for AI leadership and that democracies should lead, guided by core values such as freedom, equality, and respect for human rights. The company believes that like-minded organizations should collaborate to create AI that protects people, promotes growth, and supports national security.

Google first published its AI Principles in 2018, citing the need for clear policies before pursuing AI applications. At the time, more than 4,000 employees signed a petition opposing the use of Google’s technology for warfare, and some resigned over the issue. The company later dropped out of the bidding for a $10 billion Pentagon contract, citing possible conflicts with its principles.

Source: https://edition.cnn.com/2025/02/04/business/google-ai-weapons-surveillance/index.html