How the digital advertising industry can guide the ways in which AI transforms businesses

Join top executives in San Francisco on July 11-12 to hear how leaders are integrating and optimizing AI investments for success. Learn more

When the Microsoft-funded lab OpenAI launched ChatGPT in November 2022, millions of people realized almost overnight what tech professionals have long understood: Today’s artificial intelligence tools are powerful and advanced enough to transform daily life, as well as an incredibly wide range of industries. Microsoft’s Bing jumped from a distant second in search to a much higher profile. Concepts like large language models (LLMs) and natural language processing are now part of mainstream discussion.

However, with the spotlight also comes scrutiny. Regulators around the world are taking note of the risks AI poses to user privacy. The Elon Musk-backed Future of Life Institute amassed more than 1,000 signatures from tech leaders calling for a six-month pause on training AI tools more advanced than GPT-4, which powers ChatGPT.

As heady as the engineering and legal issues can be, the basic ethical questions are easily digestible. If developers do take a summer break from advancing AI, will they shift their focus to making sure AI respects ethical guidelines and user privacy? And at the same time, can we control the potentially disruptive effects AI could have on where ad dollars are spent and how media is monetized?

Google, IBM, Amazon, Baidu, Tencent, and a variety of smaller players are working on similar AI tools, or, in Google’s case, have already launched them. In an emerging market, it is impossible to predict which products will come to dominate or what the results will look like. This underscores the importance of protecting privacy in AI tools right now: planning for the unknown before it happens.



As the digital advertising industry eagerly pursues AI applications for targeting, measurement, creative personalization, optimization, and more, industry leaders will need to take a close look at how the technology is deployed. Specifically, we will need to look at the use of personally identifiable information (PII), the potential for accidental or intentional bias or discrimination against underrepresented groups, how data is shared through third-party integrations, and global regulatory compliance.

Search vs. AI: The great reallocation of ad spend?

When it comes to ad budgets, it’s easy to imagine what a “search vs. AI” showdown would look like. It’s far more convenient to have all the information you’re looking for gathered in one place by an AI than to rephrase search queries and click through links to narrow in on what you’re really after. If we see a generational shift in the way users discover information (that is, if young people accept AI as a central part of the digital experience going forward), non-AI search engines risk losing relevance. This could have a huge impact on the value of search inventory and on publishers’ ability to monetize search traffic.

Search continues to drive a significant portion of traffic to publisher sites, even as publishers work to build audience loyalty through subscriptions. And advertising is already making its way into AI chat: Microsoft, for example, has been testing ad placements in Bing Chat. Publishers are asking how AI vendors will share revenue with the sites their tools source information from. It’s safe to say publishers will be on the lookout for another set of data black boxes from the walled gardens they depend on for revenue. To thrive in this uncertain future, publishers need to lead the conversation and make sure industry stakeholders understand what we’re rushing into.

Develop processes with privacy in mind

Industry leaders need to be vigilant about how they and their technology partners collect, analyze, store, and share data for AI applications across their processes. Obtaining explicit user consent to collect data, along with providing clear opt-out options, must happen at the start of any interaction with AI chat or search. Leaders should consider implementing consent or opt-in prompts for AI tools that personalize content or advertising. However convenient and sophisticated these AI tools are, the price simply cannot be privacy risks for users. As industry history has shown, we should expect users to become increasingly aware of these risks. Companies should not rush the development of consumer-facing AI tools and jeopardize privacy in the process.
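In code terms, the opt-in principle above can be sketched as a consent gate that defaults to off and is checked before any personalization runs. This is a minimal illustration, not any vendor’s API: the `ConsentRecord` and `handle_prompt` names are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class ConsentRecord:
    # Explicit, per-purpose consent collected at the start of the session.
    # Every purpose defaults to False: opt-in, never opt-out by default.
    personalization: bool = False
    data_sharing: bool = False


def handle_prompt(prompt: str, consent: ConsentRecord) -> str:
    """Route a chat prompt through the personalization layer only when
    the user has explicitly opted in; otherwise serve a generic path."""
    if consent.personalization:
        # Personalized path: may use first-party data the user consented to.
        return f"[personalized] {prompt}"
    # No consent: respond without touching user-level data.
    return f"[generic] {prompt}"
```

The design choice worth noting is that absence of a consent record behaves identically to an explicit “no,” so a missing or expired record can never silently enable personalization.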

At this point, with Big Tech’s AI tools getting the most attention, we should not be lulled into a false sense of security that the fallout from these developments will be Big Tech’s problem alone. The recent layoffs at major tech companies are dispersing talent, which in turn means breakthroughs in AI will also come from smaller companies that have recruited that talent. And for publishers who don’t want to work within yet another walled garden to survive, there is a business-interest stake beyond the crucial one of privacy. Industry leaders need to treat the rise of AI chat as the pivotal moment that it is.

Let’s take this opportunity to prepare for a profitable, transparent and privacy-safe future.

Fred Marthoz is vice president of global partnerships and revenue at Lotame.

