CNET published AI-generated stories. Then its staff pushed back

In November, the venerable tech outlet CNET began publishing articles generated by artificial intelligence, on topics such as personal finance, that turned out to be riddled with errors. Today, the human members of its editorial team have unionized, calling on their bosses to provide better conditions for workers and more transparency and accountability around the use of AI.

“In this time of instability, our diverse content teams need industry-standard job protections, fair compensation, editorial independence, and a voice in the decision-making process, especially as automated technology threatens our jobs and reputations,” reads the mission statement of the CNET Media Workers Union, whose more than 100 members include writers, editors, video producers, and other content creators.

While the organizing effort began before CNET management started deploying AI, its employees could become one of the first unions to force their bosses to set guardrails around the use of content produced by generative AI services like ChatGPT. Any deal struck with CNET’s parent company, Red Ventures, could help set a precedent for how companies approach the technology. Multiple digital news outlets have cut staff recently, with some, like BuzzFeed and Sports Illustrated, simultaneously embracing AI-generated content. Red Ventures did not immediately respond to a request for comment.

In Hollywood, AI-generated writing has already sparked a worker uprising. Striking writers want studios to agree never to use AI to author scripts and never to ask writers to adapt AI-generated ones. The Alliance of Motion Picture and Television Producers rejected that proposal, instead offering to hold annual meetings to discuss technological advances. CNET’s writers and staff are represented by the Writers Guild of America.

While CNET bills itself as “your guide to a better future,” the 30-year-old publication late last year stumbled awkwardly into the new world of generative AI that can create text or images. In January, the science and technology website Futurism revealed that in November, CNET had quietly started publishing AI-created explainers like “What is Zelle and how does it work?” The stories ran under the byline “CNET Money Staff,” and readers had to hover over it to learn that the articles had been written “using automation technology.”

A torrent of embarrassing revelations followed. The Verge reported that more than half of the AI-generated stories contained factual errors, prompting CNET to issue sometimes lengthy corrections on 41 of its 77 bot-written articles. The tool the editors used also appeared to have plagiarized work from competing outlets, as generative AI is prone to do.

Then-editor-in-chief Connie Guglielmo later wrote that a plagiarism detection tool had been misused or had failed and that the site was developing additional checks. A former staff member demanded that her byline be removed from the site, concerned that AI would be used to update her stories in an effort to drive more traffic from Google search results.

In response to the negative attention to CNET’s AI project, Guglielmo published an article saying that the outlet had been testing an “internally designed AI engine” and that “AI engines, like humans, make mistakes.” Nonetheless, she promised to make some changes to the site’s disclosure and citation policies and to press ahead with its experiment in robot authorship. In March, she stepped down as editor-in-chief and now heads the outlet’s AI edit strategy.
