On Tuesday of this week, neuroscientist, founder and author Gary Marcus sat between OpenAI CEO Sam Altman and Christina Montgomery, IBM’s chief privacy and trust officer, as the three testified before the Senate Judiciary Committee for more than three hours. The senators focused heavily on Altman, because he runs one of the most powerful companies on the planet right now and because he has repeatedly asked them to help regulate his work. (Most CEOs beg Congress to leave their industry alone.)
Although Marcus has been known in academic circles for some time, his star has been on the rise of late thanks to his newsletter (“The Road to AI We Can Trust”), a podcast (“Humans vs. Machines”), and his outspoken concern about the rampant rise of AI. In addition to this week’s hearing, for example, this month he appeared on Bloomberg television and was featured in the New York Times Sunday Magazine and Wired, among other places.
This week’s hearing seemed truly historic in its way: Senator Josh Hawley characterized AI as “one of the most significant technological innovations in human history,” while Senator John Kennedy was so taken with Altman that he asked Altman to choose his own regulators. We wanted to speak with Marcus to discuss the experience and hear what he knows about what happens next.
Are you still in Washington?
I’m still in Washington. I’m meeting with legislators and their staffs and a number of other interesting people and trying to see if we can turn the kinds of things I talked about into reality.
You have taught at New York University. You have co-founded a couple of artificial intelligence companies, including one with the famous roboticist Rodney Brooks. I interviewed Brooks on stage in 2017, and he said then that he didn’t think Elon Musk really understood AI and that he thought Musk was wrong to call AI an existential threat.
I think Rod and I share the skepticism that today’s AI is anything like artificial general intelligence. There are several issues you have to disentangle. One is: are we close to AGI? The other is: how dangerous is the current AI that we have? I don’t think the current AI we have is an existential threat, but it is dangerous. In many ways, I think it’s a threat to democracy. That’s not a threat to humanity. It’s not going to annihilate all humans. But it’s a pretty serious risk.
Not long ago, you were debating Yann LeCun, Meta’s chief artificial intelligence scientist. I wasn’t clear on what that flap was about: the true significance of deep learning neural networks?
LeCun and I have discussed many things over many years. We had a public debate that David Chalmers, the philosopher, moderated in 2017. I’ve been trying to get [LeCun] to have another real debate ever since, and he won’t. He prefers to take shots at me on Twitter and things like that, which I don’t think is the most adult way to have a conversation, but since he’s an important figure, I do reply.
One thing I think we disagree on [currently] is that LeCun thinks it’s fine to use these [large language models] and that there’s no possible harm here. I think he is very wrong about that. There are potential threats to democracy, ranging from deliberate misinformation produced by bad actors, to accidental misinformation, like the law professor who was accused of sexual harassment even though he hadn’t committed it, [to the ability to] subtly shape people’s political beliefs based on training data that the public doesn’t even know anything about. It’s like social media, but even more insidious. You can also use these tools to manipulate other people and perhaps trick them into doing anything you want. You can scale them massively. There are definitely risks here.
You said something interesting about Sam Altman on Tuesday, telling the senators that he hadn’t told them what his worst fear is, which you called “pertinent,” and redirecting them to him. What he still hasn’t said is anything to do with autonomous weapons, which I discussed with him a few years ago as a major concern. I thought it was interesting that weapons didn’t come up.
We covered a lot of ground, but there was a lot we didn’t get to, including enforcement, which is really important, and national security and autonomous weapons and things like that. There will be several more [hearings].
Was there talk of open systems versus closed systems?
It barely came up. It’s obviously a really complicated and interesting question. It’s not really clear what the correct answer is. You want people to do independent science. Maybe you want to have some kind of licensing around things that are going to be deployed at a very large scale, but that carry particular risks, including security risks. It’s not clear that we want every bad actor to get access to arbitrarily powerful tools. So there are arguments for and against, and probably the correct answer will include allowing a fair degree of open source but also having some limitations on what can be done and how it can be deployed.
Any specific thoughts on Meta’s strategy of letting its language model out into the world for people to play with?
I don’t think it’s great that [Meta’s AI technology] LLaMA is out there, to be honest. I think it was a bit careless. And, you know, that’s literally one of the genies that is out of the bottle. There was no legal infrastructure in place; they didn’t consult anybody about what they were doing, as far as I know. Maybe they did, but the decision process with that, or with, say, Bing, is basically just: a company decides we’re going to do this.
But some of the things that companies decide can prove harmful, whether in the near term or in the long term. So I think governments and scientists should increasingly have some role in deciding what goes out there, [through a kind of] FDA for AI, where, if you want to do a widespread deployment, you first do a trial. You talk about the costs and benefits. You do another trial. And eventually, if we’re confident that the benefits outweigh the risks, [you do the] large-scale release. But right now, any company at any time can decide to deploy something to 100 million customers and do so without any kind of government or scientific oversight. You have to have some system where impartial authorities can weigh in.
Where would these impartial authorities come from? Isn’t everyone who knows anything about how these things work working for a company?
I don’t. [Canadian computer scientist] Yoshua Bengio doesn’t. There are lots of scientists who don’t work for these companies. It is a real concern: how do you get enough of those auditors, and how do you incentivize them to do it? But there are 100,000 computer scientists with some facet of expertise here. Not all of them are working for Google or Microsoft on contract.
Would you like to play a role in this artificial intelligence agency?
I am interested. I feel that anything we build should be global and neutral, presumably nonprofit, and I think I have a good, neutral voice here that I would like to share and use to try to get us to a good place.
How did it feel to be sitting before the Senate Judiciary Committee? And do you think they will invite you back?
I wouldn’t be surprised if they invited me back, but I have no idea. I was deeply moved to be in that room. It’s a little smaller than on TV, I suppose. But it felt like everyone was there to try to do the best they could for the U.S., for humanity. Everyone knew the weight of the moment, and by all accounts, the senators brought their best game. We knew we were there for a reason, and we gave it our best shot.