An artificial intelligence pioneer nicknamed the “Godfather of AI” has resigned from his position at Big Tech firm Google so he could speak more openly about the potential dangers of the technology.
Before resigning, Dr. Geoffrey Hinton worked at Google on machine learning algorithms for more than a decade. He reportedly earned his nickname due to his lifelong work on neural networks.
However, in a tweet on May 1, Hinton clarified that he left his position at Google “so that I could talk about the dangers of AI.”
In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.
— Geoffrey Hinton (@geoffreyhinton) May 1, 2023
In an interview with The New York Times, Hinton said his most immediate concern with AI was its use in flooding the internet with fake photos, videos and text, to the extent that many won’t “be able to know what is true anymore.”
Hinton’s other worries concerned AI taking over jobs. Further into the future, he believes AI could pose a threat to humanity because it can learn unexpected behaviors from the massive amounts of data it analyzes.
He also expressed concern at the continuing AI arms race that seeks to further develop the tech for use in lethal autonomous weapons systems (LAWS).
Hinton also expressed some partial regret over his life's work:
“I console myself with the normal excuse: If I hadn’t done it, somebody else would have.”
In recent months, regulators, lawmakers and tech industry executives have also expressed concern about the development of AI. In March, over 2,600 tech executives and researchers signed an open letter urging a temporary halt to AI development, citing “profound risks to society and humanity.”
A group of 12 European Union lawmakers signed a similar letter in April, and a recent EU draft bill classifies AI tools based on their risk levels. The United Kingdom is also extending $125 million to support a task force for the development of “safe AI.”
AI used in fake news campaigns and pranks
AI tools are already reportedly being used for disinformation, with recent examples including media outlets being tricked into publishing fake news and one German outlet even using AI to fabricate an interview.
On May 1, Binance claimed it was the victim of a ChatGPT-originated smear campaign and shared evidence of the chatbot claiming its CEO Changpeng “CZ” Zhao was a member of a Chinese Communist Party youth organization.
To all the crypto and AI sleuths out there, here is the ChatGPT thread if someone wants to dig in. As you'll see ChatGPT pulls this from a fake LinkedIn profile and a non-existent @Forbes article. We can't find any evidence of this story nor the LinkedIn page ever existing. pic.twitter.com/szLaix3nza
— Patrick Hillmann ♂️ (@PRHillmann) May 1, 2023
The bot linked to a Forbes article and a LinkedIn page from which it claimed to have sourced the information; however, the article appears not to exist and the LinkedIn profile isn’t Zhao’s.
Last week, a group of pranksters also tricked multiple media outlets around the world, including the Daily Mail and The Independent.
Related: Scientists in Texas developed a GPT-like AI system that reads minds
The Daily Mail published, and later took down, a story about a purported Canadian actor called “Saint Von Colucci” who was said to have died after a plastic surgery operation intended to make him look more like a South Korean pop star.
The news came from a press release regarding the actor’s death, which was sent by an entity masquerading as a public relations firm and used what appeared to be AI-generated images.
In April, the German outlet Die Aktuelle published an interview that used ChatGPT to synthesize a conversation with former Formula One driver Michael Schumacher, who suffered a serious brain injury in a 2013 skiing accident.
It was reported Schumacher’s family would take legal action over the article.
Magazine: AI Eye: ‘Biggest ever’ leap in AI, cool new tools, AIs are real DAOs