Join top executives in San Francisco on July 11-12 to hear how leaders are integrating and optimizing AI investments for success.. Learn more
Much has been written about the dangers of generative AI in recent months, and yet all I’ve seen boils down to three simple arguments, none of which reflect the increased risk I see coming our way. Before we get into this hidden danger of generative AI, it will be helpful to summarize the common warnings that have been doing the rounds recently:
- The risk to jobs: Generative AI can now produce human-level work products ranging from artwork and essays to scientific reports. This will be to a large extent impact the job market, but I think it’s a manageable risk as job definitions adapt to the power of AI. It will be painful for a period, but not unlike how previous generations have adapted to other labor-saving efficiencies.
- Risk of false content: Generative AI can now create human-grade artifacts at scale, including false and misleading articles, essays, documents and videos. Disinformation is not a new problem, but generative AI will allow it to be mass-produced at levels never seen before. This is a significant risk, but manageable. This is because fake content can be identified by either (a) requiring watermarking technologies that identify AI content at creation time, or (b) implementing AI-based countermeasures that are trained to identify content. of AI after the fact.
- Risk of intelligent machines: Many researchers are concerned that AI systems will scale up to a level where they develop a “will of their own” and take actions that conflict with human interests, or even threaten human existence. I think this is a real long-term risk. In fact, I wrote to “Illustrated book for adults” titled arrival mind For a few years now, he has been exploring this danger in simple terms. Still, I don’t think current AI systems will spontaneously become sentient without major structural improvements in the technology. So while this is a real danger for the industry to focus on, it’s not the most pressing risk that I see before us.
So what worries me most about the rise of generative AI?
From my perspective, where most security experts, including policymakers, go wrong is that they view generative AI primarily as a tool for creating traditional content at scale. While the technology is quite adept at producing articles, images, and videos, the bigger problem is that generative AI will unleash a whole new form of media that is highly personalized, fully interactive, and potentially far more manipulative than any form of targeted content. that we can. have faced to date.
Welcome to the age of interactive generative media
The most dangerous feature of generative AI is not that it can generate large-scale fake articles and videos, but rather that it can produce interactive and adaptive content tailored to individual users to maximize persuasive impact. In this context, interactive generative means It can be defined as targeted promotional material that is created or modified in real time to maximize influence goals based on personal data about the receiving user.
This will transform “targeted influence campaigns” from pellets aimed at broad demographics to heat-seeking missiles that can target individual people for optimal effect. And as outlined below, this new form of media is likely to come in two powerful flavors, “targeted generative advertising” and “targeted conversational influence.”
Targeted Generative Advertising is the use of images, videos, and other forms of informational content that look and feel like traditional ads but are personalized in real time to individual users. These ads will be created on the fly using generative AI systems based on influencer goals provided by third-party sponsors in combination with the personal data accessed for the specific user they are targeting. Personal data may include a user’s age, gender, and educational level, combined with their interests, values, aesthetic sensibilities, purchasing tendencies, political affiliations, and cultural biases.
In response to influencer goals and targeting data, generative AI will personalize design, featured images, and promotional messages to maximize effectiveness for that user. Everything down to the colors, fonts, and punctuation could be customized along with the age, race, and clothing style of anyone shown in the images. Will you watch video clips of urban scenes or rural scenes? Will it be set in fall or spring? Will you see images of sports cars or family vans? Every detail can be customized in real time using generative AI to maximize the subtle impact on you personally.
And because tech platforms can track user engagement, the system will learn which tactics work best on you over time, figuring out the hair colors and facial expressions that best grab your attention.
If this sounds like science fiction, consider this: Both Goal and Google have recently announced plans to use generative AI in creating online ads. If these tactics produce more clicks for sponsors, they will become standard practice and an arms race will ensue, with all the major platforms competing to use generative AI to personalize promotional content as effectively as possible.
this leads me to managed conversational influencea generative technique in which influencing targets are transmitted through interactive conversation instead of traditional documents or videos.
Conversations will occur via chatbots (such as ChatGPT and Bard) or through voice-based systems powered by similar Long Language Models (LLMs). Users will find these “conversational agents” many times throughout the day, as third-party developers will use the APIs to integrate LLMs into their websites, apps, and interactive digital assistants.
For example, you can access a website to find the latest weather forecast, strike up a conversation with a AI agent to request the information. In the process, you could be subject to conversational influence: subtle messages woven into dialogue for promotional purposes.
As conversational computing becomes commonplace in our lives, the risk of conversational influence it will be greatly expanded as paying sponsors could inject messages into the dialogue that we might not even notice. And just like targeted generative ads, messaging targets solicited by sponsors will be used in combination with personal data about the target user to optimize impact.
The data could include the user’s age, gender, and education level combined with personal interests, hobbies, values, etc., enabling real-time generative dialogue designed to optimally engage that specific person.
Why use conversational influence?
If you’ve ever worked as a salesperson, you probably know that the best way to persuade a customer is not to hand them a brochure, but to engage in face-to-face dialogue so you can introduce the product, listen to their reservations, and adjust your arguments as necessary. It’s a cyclical launch and adjustment process that can “talk” them to a purchase.
While this has been a purely human ability in the past, generative AI can now perform these steps, but with greater skill and deeper knowledge to draw from.
And while a human salesperson has only one person, these AI agents will be digital chameleons that he can adopt any style of speaking, from nerdy or folksy to suave or hip, and can follow any sales tactic, from befriending the customer to exploiting his fear of missing out. And because these AI agents be Armed with personal details, they might mention the right music artists or sports teams to facilitate a friendly dialogue.
Additionally, technology platforms could document how well previous conversations worked in persuading you, learning which tactics are most effective for you personally. Does it respond to logical appeals or emotional arguments? Looking for the biggest bargain or the highest quality? Are you convinced by time pressure discounts or free add-ons? Platforms will quickly learn to pull all your strings.
Of course, the great threat to society is not the optimized ability to sell you a pair of pants. The real danger is that the same techniques are used to push propaganda and misinformation, convincing you of false beliefs or extreme ideologies that you would otherwise reject. A conversational agent, for example, might be targeted to convince you that a perfectly safe drug is a dangerous plot against society. And because AI agents will have access to an information-filled internet, they could select evidence in ways that would overwhelm even the most knowledgeable human.
This creates an asymmetrical balance of power often called the AI tamper issue in which humans are at an extreme disadvantage, conversing with artificial agents that are highly skilled at engaging us, while we do not have the ability to “read” the true intentions of the entities we are talking to.
Unless regulated, targeted generative ads and targeted conversational influence will be powerful forms of persuasion where users will find themselves outmatched by a dull digital chameleon that offers no insight into their thought process, but is armed with a host of of data about our tastes, desires and personal tendencies. and you have access to unlimited information to fuel your arguments.
For these reasons, I urge regulators, policymakers, and industry leaders to focus on generative AI as a new form of interactive, adaptive, personalized, and deployable media at scale. Without meaningful protections, consumers could be exposed to predatory practices ranging from subtle coercion to outright manipulation.
louis rosenbergDoctor., is an early pioneer in the fields of VR, AR, and AI and the founder of Immersion Corporation (IMMR: Nasdaq), Microscribe 3D, Outland Research, and Unanimous AI.
DataDecisionMakers
Welcome to the VentureBeat community!
DataDecisionMakers is where experts, including data technicians, can share data-related insights and innovation.
If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data technology, join us at DataDecisionMakers.
You might even consider contributing an article of your own!
Read more from DataDecisionMakers