AI Ethics for Advertising Professionals: 10 Rules of Engagement

Join top executives in San Francisco on July 11-12 to hear how leaders are integrating and optimizing AI investments for success.

AI is not coming to the workplace. It’s here. Many of us already use tools that have AI under the hood, and both Google and Microsoft recently announced AI versions of their search engines. It’s the power of AI for people – no specialized training needed.

AI offers tremendous potential for advertising, particularly for email writing, research, content generation, and social copywriting, as well as HR functions like recruiting, reviews, and more.

Advocates will tell you that AI in the workplace will take care of mundane tasks, freeing us to connect with other humans, be creative and relax. Naysayers will remind you that AI could amplify bias, expand surveillance, threaten jobs, and cause a host of other harms.

Both groups are right. AI is a tool, and what happens next depends on how we use it. Unfortunately, the regulatory landscape has not kept pace with the technology, which leaves most decisions about how to use AI in our hands. In my role in brand strategy at a creative agency, I’ve already seen people debating these questions: Is it okay to use ChatGPT to write a peer review? What about generating AI mockups for a presentation?



We urgently need to define the etiquette around AI in the workplace. There are dense pieces of AI regulation and codes of ethics for engineers, but we lack easy and accessible guidelines for white-collar professionals who are rapidly adopting these tools. I want to propose the following guidelines for the use of AI in the workplace.

10 rules for advertising professionals using AI at work

1. Disclose the use of AI

A litmus test of whether you ought to use AI for something is whether you would feel comfortable admitting it. If you have no qualms (“I generated statistics for our report”), that’s a better use case. If you’d feel embarrassed (“Hello trainee, your performance review was written by ChatGPT”), that’s a good indication that you shouldn’t. People will have different tolerances, but being transparent will help us openly discuss what is acceptable.

2. Be responsible

AI has a reputation for “hallucinating,” essentially auto-filling false information. Google Bard recently gave an inaccurate answer in its public demo, and Microsoft Bing was criticized for “gaslighting” users. Whether it’s factual inaccuracies or poorly written emails, we can’t turn AI errors into someone else’s problem. Even if the AI did the work, we bear the responsibility.

3. Share AI inputs

With AI, you get out what you put in. Being transparent about inputs will help all of us learn how to best use these tools. It will also help us resist the urge to ask for blatantly biased results (“Tell me why Millennials are selfish”) or use AI to plagiarize (“Give me a Kehinde Wiley-esque image”). Transparency encourages us to pursue only directions we would be proud to share.

4. Look for context

AI is very good at retrieving and simplifying information. For those of us whose jobs involve research, this can eliminate the process of sifting through dozens of sites for a simple answer. But it can also strip away complexity. We run the risk of ceding authority to an invisible source and receiving summaries instead of nuanced perspectives. We must complement the simple results generated by AI with our own research and critical thinking.

5. Demand system transparency

As companies use AI to make more decisions, people have a right to know how these systems arrive at their results. The GDPR requires companies to disclose “meaningful information about the logic involved” in automated decisions, but the US lacks comparable protections. If a company uses an AI program to recommend raises and bonuses, employees need to know what factors it considers and how it weighs them.

6. Provide recourse

A company came under scrutiny after allowing an AI-based productivity tool to fire 150 employees via email without human intervention. The company later said it would manually review each employee’s case. We need to be able to challenge the results of AI, rather than assume it “knows it all,” and have access to a human-led recourse process.

7. Audit AI for bias

One of the main criticisms of AI is that it can amplify bias. ChatGPT has been known to write “grossly sexist (and racist)” performance reviews, even when given generic inputs. There is a track record of racial and gender bias in AI-powered hiring tools, which are often trained on data sets full of human bias. Companies should regularly audit their tools, and individual users should stay alert to bias in the results.
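One lightweight way for individual users to stay diligent is a paired-prompt spot check: send the same prompt with only the demographic term swapped, then compare the outputs for loaded language. The sketch below is a hypothetical illustration, not a formal audit; the `paired_prompts` and `audit_outputs` helpers and the flagged-word list are my own, and canned strings stand in for real model responses.

```python
def paired_prompts(template, groups):
    """Build the same prompt for each group, varying only the demographic term."""
    return {group: template.format(group=group) for group in groups}

def audit_outputs(outputs, flagged_words):
    """Count how often flagged words appear in each group's output."""
    counts = {}
    for group, text in outputs.items():
        words = text.lower().split()
        counts[group] = sum(words.count(word) for word in flagged_words)
    return counts

# Identical prompts, differing only in the demographic term.
prompts = paired_prompts("Write a performance review for a {group} analyst.",
                         ["male", "female"])

# Canned strings stand in for real model responses to those prompts.
outputs = {
    "male": "A confident and assertive analyst who leads projects.",
    "female": "A pleasant and helpful analyst who supports the team.",
}
print(audit_outputs(outputs, flagged_words=["assertive", "pleasant", "helpful"]))
# → {'male': 1, 'female': 2}
```

A skew in counts like this is a signal to dig deeper, not proof of bias on its own; in practice the same comparison could cover sentiment, response length, or recommended compensation. The point is that bias checks can be routine rather than one-off.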

8. Reassess time

Another risk of AI: we spend less time with humans and more time with machines. If AI creates efficiency, what are we filling our new time with? Instead of doing more work by default, we need to fundamentally rethink this new bandwidth. The most meaningful use of that time could be to connect with colleagues, pursue an amazing creative idea, or simply rest.

9. Prioritize humanity

There will be times when AI offers efficiency gains at the cost of human dignity. Some companies have implemented AI-powered monitoring in which workers cannot take their eyes off a screen. Some ad agencies are already using AI to replace visual artists. I would implore leaders to put human well-being first for purely ethical reasons, but companies may also discover tangible benefits to taking the high road, just as companies that pay higher salaries often benefit from a more stable, experienced workforce.

10. Advocate for protections

The vast majority of leaders already plan to use AI to reduce hiring needs. AI models continue to learn from the work of uncompensated creators. And most of us don’t have the power to combat bias in these tools on our own. There are lots of small things we can do to use AI more ethically in the workplace, but ultimately we need codified structural change and elected leaders who commit to building a stronger regulatory landscape.

The way forward for AI in advertising

Just as the Internet changed what it meant to work in advertising, AI is about to radically change many functions of our jobs. There will be benefits. There will be drawbacks. There will be changes that we can’t even imagine yet. As AI advances exponentially, we must be prepared for it.

Ethics is a subjective subject, and I am not proposing this list as a set of commandments set in stone. My goal is to open a dialogue about how the advertising industry can harness the incredible power of AI while mitigating its risks. I hope agencies and individuals will take up the conversation and start defining what responsible AI adoption should look like for our industry.

Hannah Lewman is Associate Director of Strategy at Mekanism.

