Geoffrey Hinton, widely known as the “godfather of artificial intelligence,” has quit Google and is warning about the dangers of the technology. Hinton has made foundational contributions to the field: in 2012, he and his students built a neural net that revolutionized image recognition.
Hinton has been vocal about his concerns regarding the misuse of AI and the potential dangers it poses. In an interview with the New York Times, he expressed his worries about the spread of fake information, videos, and pictures online, as well as the impact of AI on the job market.
Hinton also said that he quit Google to be able to speak freely about the dangers of artificial intelligence. He regrets some of his contributions to the field and believes that more attention needs to be paid to the ethical implications of AI.
While Hinton’s concerns are serious, AI also has the potential to bring significant positive changes in fields such as healthcare, education, and environmental sustainability. The challenge is striking a balance between the benefits and risks of this rapidly advancing technology.
Hinton was a key figure in the development of Google’s AI technology, and the approach he pioneered paved the way for current systems such as ChatGPT. His concerns highlight the need for caution and ethical considerations as AI technology continues to evolve and become more widespread.
Google’s chief scientist, Jeff Dean, said in a statement that Google appreciated Hinton’s contributions to the company over the past decade.
“I’ve deeply enjoyed our many conversations over the years. I’ll miss him, and I wish him well!”
Dean added: “As one of the first companies to publish AI Principles, we remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly.”
Toby Walsh, the chief scientist at the University of New South Wales AI Institute, said people should be questioning any online media they see now.
“When it comes to any digital data you see – audio or video – you have to entertain the idea that someone has spoofed it.”

Hinton is also concerned that AI will eventually replace jobs like paralegals, personal assistants and other “drudge work”, and potentially more in the future.
Hinton’s concerns about AI chatbots centre on the possibility that they could become more intelligent than humans and be exploited by “bad actors.” Because chatbots can generate large volumes of text automatically, they make an ideal tool for producing highly effective spambots. Authoritarian leaders could also use them to manipulate their electorates, posing a significant threat to democracy.
Hinton is also worried about the “existential risk” of creating a true digital intelligence that surpasses human intelligence. He argues that the kind of intelligence being developed is very different from the intelligence humans possess. AI chatbots, for instance, can know so much more than any one person, thanks to their ability to share information instantly across large networks.
Hinton is not alone in his concerns about the potential dangers of AI. Elon Musk has also spoken out about the need to take AI safety seriously, arguing that the development of digital superintelligence could pose a serious threat to humanity. Valérie Pisano, the chief executive of the Quebec Artificial Intelligence Institute, has also criticized the slapdash approach to safety in AI systems, arguing that it would not be tolerated in any other field.
In the short term, Hinton’s concerns about AI-generated misinformation have already become a reality. With recent upgrades to image generators such as Midjourney, it is now possible to produce photorealistic images that are difficult to distinguish from real photographs. This has made it increasingly hard for people to discern what is true on the internet, as AI-generated photos, videos, and text flood social media platforms.