On May 1, The New York Times reported that Geoffrey Hinton, the so-called “Godfather of AI,” had resigned from Google. The reason he gave for the move is that it will allow him to speak freely about the risks of artificial intelligence (AI).
His decision is both surprising and unsurprising: surprising because he has dedicated his entire career to advancing AI technology; unsurprising given the growing concerns he has expressed in recent interviews.
There is symbolism in this announcement date. May 1 is May Day, known for celebrating workers and the blossoming of spring. Ironically, AI and in particular generative AI based on deep learning neural networks can displace a large part of the workforce. We are already starting to see this impact, for example, at IBM.
AI replacing jobs and moving closer to superintelligence?
No doubt others will follow, as the World Economic Forum sees the potential for 25% of jobs to be affected in the next five years, with AI playing a role. As for the blossoming of spring, generative AI could unleash a new beginning of symbiotic intelligence: man and machine working together in ways that will lead to a renaissance of possibility and abundance.
Alternatively, this could be when the advancement of AI begins to approach superintelligence, possibly posing an exponential threat.
Hinton wants to talk about these kinds of worries and concerns, and he couldn’t do it while working for Google or any other corporation looking to commercially develop AI. As Hinton said in a Twitter post: “I left so I could talk about the dangers of AI without considering how this affects Google.”
May Day
Maybe it’s just a play on words, but the date of the announcement evokes another association: “mayday,” the distress signal used when there is immediate and serious danger. A mayday call is reserved for genuine emergencies, as it is a priority call demanding an immediate response. Is the timing of this news a mere coincidence, or is it intended to symbolically add to its meaning?
According to the Times article, Hinton’s immediate concern is the ability of AI to produce human-quality content in text, video, and images, and how bad actors can use that ability to spread misinformation and disinformation such that the average person “will not be able to know what is true anymore.”
He also now believes that we are much closer to the time when machines will be smarter than the smartest people. This point has been much discussed, and most AI experts have seen it as far in the future, perhaps 40 years or more.
That list included Hinton. By contrast, Ray Kurzweil, a former Google engineering director, has long claimed that this moment will arrive in 2029, when AI will easily pass the Turing test. Kurzweil’s timeline had been an outlier, but not anymore.
According to Hinton’s May Day interview: “The idea that these things [AI] could actually outsmart people: some people believed that. But most people thought it was far off. And I thought it was far off. I thought it was 30 to 50 years away or even more. Obviously, I don’t think that anymore.”
Those 30 to 50 years could have been used to prepare companies, governments and societies through governance practices and regulations, but now the wolf is at the door.
Artificial general intelligence
A related topic is the discussion of artificial general intelligence (AGI), the stated mission of OpenAI, DeepMind and others. The AI systems used today mainly excel at specific, limited tasks, such as reading radiology images or playing games, and a single algorithm cannot excel at both. In contrast, AGI would possess human-like cognitive abilities such as reasoning, problem-solving, and creativity, and, as a single algorithm or network of algorithms, would perform a wide range of tasks at or above a human level across different domains.
Much like the debate about when AI will be smarter than humans, at least at specific tasks, predictions about when AGI will be achieved vary widely, from a few years to several decades or centuries, or possibly never. These predicted timelines, too, are being pulled closer by new generative AI applications such as ChatGPT, which are based on transformer neural networks.
Beyond their intended purposes, such as creating compelling images from text prompts or providing human-like text answers to queries, these generative AI systems exhibit emergent behaviors: novel, intricate, and unexpected capabilities that were never explicitly designed in.
For example, the ability of GPT-3 and GPT-4, the models behind ChatGPT, to generate code is considered emergent behavior, as this ability was not part of the design specification. Instead, it emerged as a byproduct of model training. The developers of these models cannot fully explain how or why these behaviors develop. What can be deduced is that they arise from large-scale training data, the transformer architecture, and the powerful pattern-recognition abilities the models develop.
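To make the point concrete, here is a minimal sketch of prompting a GPT-4-class model to write code, the emergent capability described above. It assumes the official openai Python package and an API key in the environment; the prompt itself is illustrative, not from the article.

```python
# Minimal sketch: asking a GPT-4-class model to generate code.
# Assumes the official `openai` package and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": "Write a Python function that checks whether a string is a palindrome.",
        }
    ],
)

# The model was trained to predict text, not to program, yet it typically
# returns working code -- the kind of emergent capability discussed above.
print(response.choices[0].message.content)
```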
Timelines speed up, creating a sense of urgency
It is these advances that are recalibrating the timelines for advanced AI. In a recent CBS News interview, Hinton said he now believes AGI could be achieved in 20 years or less. He added that we could be close to computers being able to generate ideas to improve themselves: “That’s a problem, right? We have to think hard about how to control that.”
Early evidence of this ability can be seen in the nascent AutoGPT, an open-source recursive AI agent. Beyond being usable by anyone, it can autonomously use the output it generates to create new prompts, chaining these operations together to complete complex tasks, as sketched below.
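The sketch below illustrates the recursive prompt-chaining idea behind agents like AutoGPT; it is not AutoGPT’s actual code. It assumes the official openai Python package, and the helper names and prompts are invented for illustration.

```python
# Illustrative sketch of a recursive agent loop (not AutoGPT's real implementation):
# the model's own output is fed back in as part of the next prompt,
# chaining steps toward a stated goal.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_model(prompt: str) -> str:
    # One chat-completion call; the loop below decides what to ask next.
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def run_agent(goal: str, max_steps: int = 5) -> str:
    work_so_far = ""
    for _ in range(max_steps):
        prompt = (
            f"Goal: {goal}\n"
            f"Work so far:\n{work_so_far}\n"
            "Propose and carry out the next step, or reply DONE if the goal is met."
        )
        step = ask_model(prompt)
        if step.strip().startswith("DONE"):
            break
        # The model's output becomes input to the next iteration -- the chaining
        # behavior described above.
        work_so_far += "\n" + step
    return work_so_far
```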
In this way, AutoGPT could potentially be used to identify areas where the underlying AI models could be improved and then generate ideas for how to improve them. Not only that, but as The New York Times columnist Thomas Friedman notes, the open-source code can be exploited by anyone. He asks: “What would ISIS do with the code?”
It is not a given that generative AI specifically, or the broader effort to develop AI, will lead to bad outcomes. However, the acceleration of timelines for more advanced AI brought about by generative AI has created a strong sense of urgency for Hinton and others, clearly leading to their distress signal.
Gary Grossman is Senior Vice President of Technology Practice at Edelman and global leader of the Edelman AI Center of Excellence.