Of all the emotions I was expecting from this week’s Google I/O keynote, “horror” was not on my short list. Google has spent a couple of decades creating products that make my life easier. This week, it demonstrated the exact opposite: a way to make communicating with other people worse.
Google I/O this year focused on artificial intelligence, and one of its centerpieces was AI-based text generation, particularly in Google’s office suite. At various points throughout the keynote, a system called “Help Me Write” was demonstrated, which rewrites simple prompts in Gmail and other apps as more polished paragraphs of text. But more polished does not automatically mean better.
Good writing can delight us with an intriguing turn of phrase. It can tell us something about how a person thinks and feels. Or it can offer some kind of useful detail about the world. And there’s no reason AI can’t play a role. I have seen authors use generative tools to create unexpected prose or summarize data in a readable way. I’m excited to see people explore new uses for it.
But the ideas on the Google stage were almost impressive in their ability to serve precisely none of the main purposes of writing. Its Help Me Write examples calcify the worst tendencies of “professional” communication. They produce a bland, bloated style that turns a human prompt into something that uses more words to say less.
Let’s look at one of the examples. At the beginning of the presentation, a presenter automatically generates a job description for a textile design position, which (in the demo scenario) gets sent out into the world. Later, another presenter replies with the following message:
interested! 8 years of experience with international brands. Call Tuesday.
Help Me Write works its magic and produces the following:
Hello Lory,
Thank you very much for contacting me! I am very interested in the position of Senior Textile Designer at Cymbal. I have 8 years of experience working with international brands and I am confident that I have the skills and experience you are looking for.
I am available for a call on Tuesday at your earliest convenience. Please let me know what time works for you.
The latter version is obviously the way you’re expected to apply for a job because, written by a human, it suggests you’re committed enough to write a multi-paragraph response and capable of hitting certain cultural and class signifiers. An AI-generated response, however, serves neither of those purposes. It was produced almost instantly by a predictive text system that requires minimal English proficiency, and so far that system can only handle fairly formulaic writing, so it won’t replace human labor in many real-world situations. All it does is build a longer, more stilted version of the original prompt, one that probably only has value until everyone assumes it was written with AI.
What’s worse, the AI generation reinforces the idea that overwrought American business-speak is the required register, regardless of whether it’s a necessary skill for the job. I’ve seen thoughtful stories about people with dyslexia using ChatGPT to produce text that is, as a Washington Post article puts it, “unfailingly professional and courteous.” But there is a simpler, less discussed alternative: being willing to accept wider variation in the way people communicate. I don’t begrudge anyone using AI writing to meet largely arbitrary standards, but on a societal level, it’s a linguistic arms race toward a duller future.
For another example, take this friendly congratulatory message from Google’s VP of Android Engineering Dave Burke to SVP of Devices and Services Rick Osterloh:
rick congratulations on the amazing launch of pixel fold with android dave
Here is what Help Me Write produces:
Dear Rick,
Congratulations on the successful launch of the Pixel Fold! I am really impressed with the device and how well it integrates with Android. The folding screen is a game changer and I can’t wait to see what you do with it next.
Best,
dave
I’m going to assume this is ironic, because I know how executives send personal emails, and they seem completely comfortable with single-sentence missives without punctuation. But even so, it’s worse! It tells me less about how the sender sounds. It takes longer to read. It makes the sender sound like an interchangeable robot.
To understand the pragmatic value of this process right now, you have to look at an email from earlier in the keynote. The core of the generation is genuinely cool: it takes a very simple request for an airline refund and populates it with concrete details pulled from your inbox, like the name and flight date. But then it embellishes the facts with a few paragraphs of angry prose. Here is the original:
request a full refund for this canceled flight
And the AI generated version:
Dear Customer Service,
I am writing to request a full refund for my canceled flight, DF 4542, on February 4, 2023. I understand that you are offering a voucher as a goodwill gesture, but I would prefer to be reimbursed for the cost of my ticket.
I have been a loyal customer of your airline for many years and have always been satisfied with your service. However, I am very disappointed in the way my last flight was handled. I was not notified in advance of the cancellation and was forced to make alternative arrangements last minute. This caused me a lot of inconvenience and expense.
I believe a full refund is the only fair way to compensate me for the problems I experienced. I have attached a copy of my original ticket for your reference.
Thanks for your time and consideration.
Sincerely,
Rowan
There is no indication that these angry claims are accurate. It’s a bureaucratic deimatic display: convince one hapless customer service rep that you’re crazy enough to write all that text, and they’ll conclude you’ll probably be a real pain in the ass until you get your money back. I’ve seen this idea articulated best on Tumblr, of all places. In a popular AI post, someone described using ChatGPT to carry out a kind of high-tech version of Brazil‘s Form 27B/6 gambit, generating a fake legal threat letter for a landlord who was breaking housing laws. As a fellow longtime renter, I applaud them.
But this material is only effective during a brief transition period, while generative text isn’t yet in widespread use and readers are still likely to assume a human writer is behind it. Once you know it’s a machine, the illusion evaporates. You’re left with a world full of communications that are longer, less carefully crafted, and more boring to read. I’d rather hire someone on the basis of an honest “8 years of experience” than a cover letter full of empty automated prose.
By contrast, Google’s most useful example of Help Me Write was simply conveying information. In an email about a potluck, the AI was able to look at a document listing the dishes people had signed up to bring, then summarize that list as a single line in the email. It saves writers the step of pasting in a series of items and saves readers the inconvenience of clicking through to another tab. More importantly, its value doesn’t depend on pretending a human wrote it, and if Google gets its way, that’s a trick that won’t work for much longer.