The excitement around the arrival in London of OpenAI CEO Sam Altman was palpable in the queue that snaked around the University College London building ahead of his speech on Wednesday afternoon. Hundreds of eager students and fans of OpenAI’s ChatGPT chatbot had come to see the UK leg of Altman’s world tour, which is set to take in some 17 cities. This week he has already visited Paris and Warsaw. Last week he was in Lagos. Next, he heads to Munich.
But the queue was met by a small group of protesters who had traveled to voice their anxiety that AI is moving too fast. “Sam Altman is willing to gamble on humanity in the hope of some kind of transhumanist utopia,” one protester yelled into a megaphone. Ben, another protester, who declined to share his last name in case it affects his job prospects, was also worried. “We are particularly concerned about the development of future AI models that could be existentially dangerous for the human race,” he said.
Speaking to a packed auditorium of about 1,000 people, Altman seemed calm. Wearing a smart blue suit with patterned green socks, he spoke in clipped responses, always to the point. His tone was optimistic as he explained how he believes AI could revitalize the economy. “I am excited that this technology can recapture the lost productivity gains of the last few decades,” he said. While he did not mention the protests outside, he did acknowledge his concern about how generative AI could be used to spread disinformation.
“Humans are already good at misinformation, and perhaps GPT models will make it easier. But that’s not what scares me,” he said. “I think one thing that will be different [with AI] is the interactive, personalized, persuasive capability of these systems.”
Although OpenAI intends to build ways for ChatGPT to refuse to spread disinformation, and plans to create monitoring systems, he said it will be difficult to mitigate these impacts once the company releases open-source models to the public, as it announced it would do several weeks ago. “OpenAI’s techniques for what we can do on our own systems won’t work the same,” he said.
Despite that caveat, Altman said it is important that artificial intelligence not become over-regulated while the technology is still emerging. The European Parliament is currently debating the AI Act, new rules that would shape how companies can develop such models and could create an AI office to oversee compliance. The United Kingdom, by contrast, has decided to spread responsibility for AI among different regulators, including those covering human rights, health and safety, and competition, rather than creating a dedicated oversight body.
“I think it’s important to get the balance right here,” Altman said, alluding to the debates now taking place among lawmakers around the world about how to create rules for AI that protect societies from potential harm without stifling innovation. “The correct answer is probably somewhere between the traditional European-British approach and the traditional American approach,” he said. “I hope we can all get it right together this time.”