Fear the fire or harness the flame: the future of generative AI

Generative AI has conquered the world. So much so that in recent months, the technology has twice been a major feature on CBS's "60 Minutes." The rise of remarkable chatbots like ChatGPT has even prompted warnings of out-of-control technology from some artificial intelligence (AI) experts. While the current state of generative AI is clearly impressive – perhaps stunning would be a better adjective – it could be even more advanced than is generally believed.

This week, The New York Times reported that some researchers in the tech industry believe these systems have moved toward something that cannot be explained as a "stochastic parrot," a system that simply mimics its underlying dataset. Instead, they are seeing "an artificial intelligence system that is generating human-like responses and ideas that were not programmed into it." The observation comes from Microsoft researchers and is based on the responses they received to prompts given to OpenAI's ChatGPT.

The researchers' view, as expressed in a paper published in March, is that the chatbot showed "sparks of artificial general intelligence" (AGI), the term for a machine that matches the ingenuity of the human brain. This would be a significant development, as most believe AGI is still many years, possibly decades, in the future. Not everyone agrees with this interpretation, but Microsoft has reorganized parts of its research labs to include multiple groups dedicated to exploring the idea.

Improvising memory

Separately, Scientific American described several similar research results, including one from Columbia University philosopher Raphaël Millière. He typed a program into ChatGPT and asked it to calculate the 83rd number in the Fibonacci sequence.

“It’s multi-step reasoning to a very high degree,” he said.

The chatbot nailed it. It shouldn't have been able to, since it isn't designed to handle a multi-step process. Millière hypothesized that the machine improvised a memory within the layers of its network, another AGI-like behavior, in order to interpret words according to their context. He believes this is much like how nature repurposes existing capabilities for new functions, such as feathers evolving for insulation before being used for flight.
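
To get a sense of the task, here is a minimal sketch in Python of the kind of program involved; the article does not reproduce Millière's actual code, so this is an illustrative assumption. Computing the 83rd Fibonacci number means carrying intermediate results forward through dozens of iterations, exactly the sort of multi-step bookkeeping a chatbot is not explicitly designed to perform:

```python
# Illustrative sketch only: the article does not include Milliere's actual code.
def fibonacci(n: int) -> int:
    """Return the nth Fibonacci number, with fib(1) == fib(2) == 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b  # each iteration carries state forward one step
    return a

print(fibonacci(83))  # 99194853094755497
```

Running this in an interpreter is trivial; what surprised researchers is that a language model, which has no interpreter, appeared to track that evolving state well enough to produce the right answer.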

AI marches on

With these models arguably already showing the first sparks of AGI, developers continue to push large language models (LLMs) forward. Late last week, Google announced significant improvements to its Bard chatbot, including moving Bard to the new PaLM 2 large language model. Per a CNBC report, PaLM 2 uses nearly five times more training data than its 2022 predecessor, allowing it to perform more advanced coding, math, and creative writing tasks. Not to be outdone, this week OpenAI began offering plugins for ChatGPT, including the ability to access the internet in real time instead of relying solely on a dataset with content through 2021.

At the same time, Anthropic announced an expanded "context window" for its Claude chatbot. Per a LinkedIn post from artificial intelligence expert Azeem Azhar, a context window is the length of text that an LLM can process and respond to.

“In a sense, it is like the ‘memory’ of the system for a given analysis or conversation,” Azhar wrote. “Larger context windows allow systems to have much longer conversations or analyze much larger and more complex documents.”

According to this post, Claude’s window is now about three times larger than ChatGPT’s.
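
To make the idea concrete, the sketch below uses OpenAI's open-source tiktoken tokenizer to show how a context window, measured in tokens rather than characters, caps what a model can ingest at once; the window size here is a hypothetical placeholder, not the actual limit of Claude, ChatGPT, or any other model:

```python
# Minimal sketch of how a context window constrains input. Uses OpenAI's
# open-source tiktoken tokenizer; the window size is a hypothetical
# placeholder, not any specific model's real limit.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
WINDOW_TOKENS = 8_192  # hypothetical context window, in tokens

def fits_in_window(text: str) -> bool:
    """True if the text's token count fits inside the context window."""
    return len(enc.encode(text)) <= WINDOW_TOKENS

def truncate_to_window(text: str) -> str:
    """Keep only as many leading tokens as the window allows."""
    tokens = enc.encode(text)
    return enc.decode(tokens[:WINDOW_TOKENS])

long_document = "Quarterly report, section one. " * 3000  # far beyond the window
print(fits_in_window(long_document))                       # False
print(fits_in_window(truncate_to_window(long_document)))   # True
```

A larger window raises that ceiling, which is why an expanded window lets a system hold a much longer conversation or analyze a bigger document before anything has to be cut or summarized away.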

All of which is to say that if ChatGPT exhibited sparks of AGI in research several months ago, the state of the art has already moved beyond those capabilities. That said, these models still have numerous shortcomings, including occasional hallucinations in which they simply make up answers. But it is the speed of progress that has spooked many and led to urgent calls for regulation. However, Axios reports that the likelihood of US lawmakers banding together to act on AI regulation before the technology advances much further remains slim.

Existential risk or fear of the unknown?

Those who see an existential danger in AI worry that it could destroy democracy or humanity. This group of experts now includes Geoffrey Hinton, the "godfather of AI," along with longtime AI doomsayers such as Eliezer Yudkowsky. The latter has said that by building a superhumanly intelligent AI, "literally everyone on Earth will die."

While their outlook is not as dire, even executives at major AI companies (including Google, Microsoft, and OpenAI) have said they believe AI regulation is necessary to prevent potentially harmful outcomes.

In the midst of all this angst, Casey Newton, author of the Platformer newsletter, recently wrote about how he tries to approach what is essentially a paradox. Should his coverage emphasize the hope that AI is the best of us, poised to solve complex problems and save humanity? Or should it dwell on how AI is the worst of us, obfuscating the truth, destroying trust and, ultimately, humanity?

Some believe these concerns are exaggerated and instead view the response as a reactionary fear of the unknown, or what amounts to technophobia. For example, essayist and novelist Stephen Marche wrote in The Guardian that "technological doomerism" is a "sort of exaggeration."

He blames this in part on the fears of the engineers who build the technology but "just have no idea how their inventions interact with the world." Marche dismisses concerns that AI is about to take over the world as anthropomorphism and storytelling: "It's a movie that plays in the collective mind, nothing more." Demonstrating how captivated we are by these themes, a new movie expected this fall "pits humanity against AI forces in a planet-devastating war for survival."

Strike a balance

A common-sense approach was expressed in an opinion piece by Professor Ioannis Pitas, president of the International AI Doctoral Academy. Pitas believes AI is a necessary human response to an increasingly complex global society and physical world. He sees the positive impact of AI systems greatly outweighing their negative aspects if the proper regulatory measures are taken. In his view, AI must continue to develop, but with regulations that minimize its already evident and potential negative effects.

This is not to say there are no dangers ahead with AI. Alphabet CEO Sundar Pichai has said: "AI is one of the most important things humanity is working on. It is more profound than electricity or fire."

Perhaps fire provides a good analogy. There have been many mishaps in the handling of fire, and they still happen occasionally. Fortunately, society has learned to reap fire's benefits while mitigating its dangers through standards and common sense. The hope is that we can do the same with AI, before the sparks of AGI burn us.

Gary Grossman is Senior Vice President of Technology Practice at Edelman and global lead of the Edelman AI Center of Excellence.
