CNET's AI-generated content controversy highlights the importance of factual accuracy and the ethical concerns surrounding AI usage. Brands need to be vigilant, establish clear AI policies, and maintain transparency with their audience.
Did you know that 78% of US adults see news articles written by AI as a step in the wrong direction because they could lead to the spread of misinformation?
With growing demand for factual accuracy and the ethical use of content, it’s no surprise that AI-generated content has been in the news for quite some time. Remember the copyright lawsuits filed over AI-generated art? Or the much-talked-about Writers Guild strike in the US?
One case that might have slipped through the cracks is CNET, a leading tech media platform. The company came under extensive public and media scrutiny after factual inaccuracies were discovered in its AI-generated articles. It was a pivotal moment that led to the introduction of an elaborate AI policy and corrections to multiple stories, months after they were originally written by AI.
Let’s look at what transpired.
It started in January 2023, when an online marketer tweeted about CNET Money using “automation technology” to create its news articles. The story was picked up by the tech site Futurism, which reported that CNET had been “quietly publishing entire articles generated by AI.” Futurism also pointed out “dumb errors” in these articles that a (human) expert would not have made.
For instance, an article titled “What is Compound Interest?” stated that a principal deposit of $10,000 with 3% interest would earn you $10,300 in a year.
This is factually incorrect.
The earnings from interest would be $300; $10,300 is the total value of the principal plus the interest. However, the way the article was structured, readers who do not understand compound interest would have come away with the wrong idea.
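To make the error concrete, here is a minimal Python sketch of the calculation, assuming annual compounding as in the article’s example (the variable names are our own):

```python
# A minimal sketch of the calculation CNET's article got wrong.
# Assumes annual compounding; figures come from the article's example.

principal = 10_000   # initial deposit, in dollars
rate = 0.03          # 3% annual interest rate
years = 1

balance = principal * (1 + rate) ** years  # principal plus interest
interest_earned = balance - principal      # the actual earnings

print(f"Balance after {years} year(s): ${balance:,.2f}")  # $10,300.00
print(f"Interest earned: ${interest_earned:,.2f}")        # $300.00, not $10,300
```

Conflating the ending balance with the interest earned is exactly the kind of mistake a human finance writer would be unlikely to make.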
This factual error was just one of many found across the more than 70 AI-generated articles CNET Money published between November 2022 and January 2023.
Other stories included titles like “Should You Break an Early CD for a Better Rate?” and “What is Zelle and How Does It Work?”
While we believe there’s nothing wrong with adopting new technology and experimenting with it, the problem is that CNET tried to hide what it was doing.
Until the news broke, readers were unaware that they were reading AI-generated content, as the byline was attributed to “CNET Money Staff.” The use of AI was disclosed only in the author’s bio, which was not directly visible on the screen.
Needless to say, there was public outcry when this news got out.
In addition to the reports of factual inaccuracies detected in the AI content, questions were raised about whether the AI was being used ethically.
When an independent agency ran the articles through AI content detectors, it found that over 87% of CNET’s AI-generated content was detectable with a publicly available tool; only 12.8% showed zero detectable AI content.
Following the reports and tweets, CNET took measures to address the backlash against its AI-generated content, including issuing corrections to the affected stories and publishing a formal AI policy.
For example, one of the policy’s tenets on AI usage states, “If and when we use generative AI to create content, that content will be sourced from our own data, our own previously published work, or carefully fact-checked by a CNET editor to ensure accuracy and appropriately cited sources.”
With the rise of AI, over 90% of enterprises have already adopted at least one AI technology. This has sparked concern among over 71% of employees regarding job security and the risk of being replaced. This widespread apprehension highlights the need for companies to approach AI adoption with sensitivity and foresight.
If you are planning to use AI tools to create content, here are three takeaways from the CNET controversy:

1. Be vigilant: fact-check every piece of AI-generated content before it is published.
2. Establish a clear AI policy that spells out how and when AI is used.
3. Maintain transparency with your audience about which content is AI-generated.
What are your thoughts on the use of AI in journalism?