
‘Godfather of AI’ sounds alarm on Google weapons plan

The “godfather of AI,” Geoffrey Hinton, who was instrumental in Google’s artificial intelligence initiatives, has criticized the company for prioritizing profits over safety after it retracted its commitment to refrain from using AI in weaponry. Hinton, a British computer scientist who received the Nobel Prize in Physics last year for his contributions to AI, described Google’s reversal as a “sad example” of corporations disregarding AI-related concerns.

He remarked, “It is another sad example of how companies behave when there is a conflict between safety and profits.” Recently, Google eliminated a long-standing commitment from its principles that prohibited the use of AI for developing weapons that could harm individuals, arguing that democratic nations need to leverage the technology for national security in an “increasingly complex geopolitical landscape.”

Hinton’s remarks represent his most pointed criticism of Google since he departed the company two years ago, driven by concerns over the uncontrollable nature of the technology. In 2012, he and two students at the University of Toronto created the neural network technology that underpins modern AI systems. After Google acquired his start-up the following year, he contributed significantly to the company’s AI advancements, which led to the creation of chatbots like ChatGPT and Google’s Gemini. He left in 2023 to freely express his concerns about reckless AI decisions made by companies.

At that time, Hinton expressed regret over his life’s work and voiced worries about the “existential risk” posed by increasingly intelligent AI systems. In a recent announcement, James Manyika, a senior vice-president at Google, and Sir Demis Hassabis, CEO of Google DeepMind, stated, “We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights.”

Stuart Russell, a fellow British computer scientist, said the decision by Sir Demis was “upsetting” and “distressing.” He highlighted the danger of companies using AI that requires no human oversight to create “very cheap weapons of mass destruction that are easy to proliferate,” asking, “Why is Google contributing to this?” Russell noted that, unlike the pursuit of AI superintelligence, an AI weapon could be “dumb,” emphasizing, “You don’t have to be that smart to kill people.”
