“A lot of the headlines have said that I think it should stop now, and I’ve never said that,” he says. “First of all, I don’t think that’s possible, and I think we should continue to develop it because it could do amazing things. But we must equally strive to mitigate or prevent possible bad consequences.”
Hinton says he didn’t leave Google to protest its handling of this new form of AI. In fact, he says, the company moved relatively cautiously despite having an advantage in the area. Google researchers invented a type of neural network known as a transformer, which has been crucial to the development of models like PaLM and GPT-4.
In the 1980s, Hinton, a professor at the University of Toronto, along with a handful of other researchers, sought to make computers more intelligent by training artificial neural networks on data rather than programming them in the conventional way. The networks could digest pixels as input, and as they saw more examples, they would adjust the values connecting their crudely simulated neurons until the system could recognize the content of an image. The approach showed flashes of promise over the years, but it wasn't until about a decade ago that its true power and potential became apparent.
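The core idea described above, adjusting the values connecting simulated neurons as more examples come in, can be illustrated with a toy sketch. This is an invented, minimal example (a single sigmoid "neuron" trained on made-up 4-pixel images), not anything resembling Hinton's actual models:

```python
# Toy illustration: a single simulated neuron learns to label 4-pixel
# "images" by repeatedly nudging its connection weights. Data is invented.
import numpy as np

rng = np.random.default_rng(0)

# 200 random 4-pixel images; label is 1 when the left column is brighter.
X = rng.random((200, 4))
y = (X[:, 0] + X[:, 2] > X[:, 1] + X[:, 3]).astype(float)

w = np.zeros(4)   # connection weights, one per pixel
b = 0.0           # bias term
lr = 0.5          # learning rate

for _ in range(500):                      # repeated exposure to examples
    p = 1 / (1 + np.exp(-(X @ w + b)))   # sigmoid neuron output
    grad_w = X.T @ (p - y) / len(y)      # gradient of cross-entropy loss
    grad_b = (p - y).mean()
    w -= lr * grad_w                     # adjust weights to reduce mistakes
    b -= lr * grad_b

acc = (((1 / (1 + np.exp(-(X @ w + b)))) > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Modern deep learning stacks many such layers of neurons and trains them the same basic way: compute errors, then shift the connection weights to shrink them.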
In 2018, Hinton received the Turing Award, the most prestigious award in computer science, for his work on neural networks. He received it along with two other pioneering figures: Yann LeCun, Meta's chief artificial intelligence scientist, and Yoshua Bengio, a professor at the University of Montreal.
That’s when a new generation of multi-layered artificial neural networks, fed vast amounts of training data and running on powerful computer chips, was suddenly far better at tagging photo content than any existing program.
The technique, known as deep learning, sparked a renaissance in artificial intelligence, with big tech companies rushing to recruit AI experts, build ever more powerful deep learning algorithms, and apply them to products like facial recognition, translation, and speech recognition.
Google hired Hinton in 2013 after acquiring his company, DNNResearch, founded to commercialize deep learning ideas from his university lab. Two years later, one of Hinton’s graduate students who had also joined Google, Ilya Sutskever, left the search firm to co-found OpenAI as a nonprofit counterbalance to the power big tech companies were accruing in AI.
Since its inception, OpenAI has focused on increasing the size of neural networks, the volume of data they are trained on, and the computing power they use. In 2019, the company reorganized as a for-profit corporation with outside investors, then took $10 billion from Microsoft. It has developed a series of surprisingly fluent text-generation systems, most recently GPT-4, which powers the premium version of ChatGPT and has amazed researchers with its ability to perform tasks that seem to require reasoning and common sense.
Hinton believes that we already have technology that could prove disruptive and destabilizing. He points to the risk, as others have, that more advanced language algorithms could fuel more sophisticated disinformation campaigns and interfere in elections.