Politicians need to learn how AI works: fast

This week, US senators heard alarming testimony suggesting that unchecked AI could steal jobs, spread misinformation, and generally “go bad enough,” in the words of OpenAI CEO Sam Altman (whatever that means). He and several lawmakers agreed that the US may now need a new federal agency to oversee the development of the technology. But there was also agreement at the hearing that no one wants to stifle a technology that could boost productivity and give the US a head start in a new technological revolution.

Concerned senators might consider speaking with Missy Cummings, a former fighter pilot who is now a professor of engineering and robotics at George Mason University. She studies the use of AI and automation in safety-critical systems, including automobiles and aircraft, and returned to academia earlier this year after a stint at the National Highway Traffic Safety Administration, which oversees automotive technology, including Tesla’s Autopilot and self-driving-car systems. Cummings’ insight could help politicians and lawmakers trying to weigh the promise of much-hyped new algorithms against the risks that lie ahead.

Cummings told me this week that she left NHTSA with a sense of deep concern about the autonomous systems being deployed by many automakers. “We’re in serious trouble as far as the capabilities of these cars,” Cummings says. “They’re not even close to being as capable as people think they are.”

I was struck by the parallels with ChatGPT and the similar chatbots now generating excitement and concern about the power of AI. Automated driving features have been around longer, but like large language models, they rely on machine learning algorithms that are inherently unpredictable, difficult to inspect, and demand a different kind of engineering thinking than in the past.

Also like ChatGPT, Tesla’s Autopilot and other self-driving projects have been elevated by absurd amounts of hype. Heady dreams of a transportation revolution led automakers, startups, and investors to pour huge sums into developing and deploying a technology that still has many unsolved problems. The mid-2010s saw a permissive regulatory environment around self-driving cars, with government officials reluctant to rein in a technology that promised to be worth billions to American companies.

After billions were spent on the technology, self-driving cars still struggle, and some auto companies have canceled major self-driving projects. Meanwhile, as Cummings notes, it’s often unclear to the public just how capable semi-autonomous technology really is.

In a sense, it’s good to see governments and legislators moving quickly to propose regulation of generative AI tools. The current panic centers on large language models and tools like ChatGPT that are remarkably good at answering questions and solving problems, even though they still have significant shortcomings, including a tendency to confidently fabricate facts.

At this week’s Senate hearing, OpenAI’s Altman, whose company created ChatGPT, went so far as to call for a licensing system to control whether companies like his can work on advanced AI. “My worst fear is that we, the field, the technology, the industry, will cause significant damage to the world,” Altman said during the hearing.
