The AI Is Coming From Inside the House

Credit: Adobe Firefly

It’s a story as old as… well, about a year.

Since the launch of ChatGPT in November 2022, there’s been a disturbingly regular series of incidents where an online publisher has been caught publishing low-quality articles written in whole or in part by generative AI, often without disclosing the AI’s role. Many of these stories were broken by Futurism, which has become the de facto content cop walking the GenAI beat.

The latest of these came earlier this week: poorly written articles attributed to clearly fake writer profiles, complete with AI-generated headshots. The pattern was familiar, but what elevated this particular incident to a new level of notoriety was the offending publication: Sports Illustrated, one of the few media brands you might un-ironically call venerable. Although it’s changed hands a few times in recent years, SI has a long history of solid sports journalism, and more than once published articles by novelist William Faulkner.

The fall of once-revered brands to mere licensing opportunities harvested by the likes of the Arena Group (which owns SI) is simply a reality of today’s media market, and one I leave to Brian Morrissey to expertly unpack. But Sports Illustrated’s veneer of legitimacy clearly still matters, at the very least to its own editorial staff, whose union put out a statement condemning the practice and asserting that it doesn’t reflect the standards its reporters and editors adhere to.

It’s worth noting here that the Arena Group says the content in question was sourced from a third party, AdVon Commerce, and that AdVon claimed the articles were written by humans, not AI, with the bylines faked to protect those writers’ privacy.

Even if there’s some truth to the notion that humans wrote the content, the deception doesn’t exactly make SI, AdVon, or Arena look good. Bylines exist for transparency and attribution; while pen names are sometimes OK, their pseudonymous nature should be clear (my former employer, CoinDesk, has an excellent policy about the use of pseudonyms in bylines and sourcing).

An Arena Group spokesperson sent me a statement that said the practice of faking bylines is one they “strongly condemn” and that they’d remove the content, launch an investigation, and end the partnership with AdVon.


Not a Job for Robots

While this scandal is nowhere near as juicy as everything that’s happened at OpenAI recently, it has the potential to be a watershed moment for media, since it involves such a well-regarded brand. Regrettable as the events are, the teachable moment they offer should be welcomed, and the lessons for other publications go beyond simply “don’t do that.”

First, let’s assume the articles were AI-written, at least in part (even if you accept AdVon’s word, it’s possible the company was fooled by desperate writers trying to save time with ChatGPT or some other tool). Using GenAI in content isn’t, in and of itself, a no-no. The problem began with relying on AI for a knowledge task (generating “original” content from its training data) rather than a language task (adapting existing content). Once you’re in knowledge-task territory, you can’t avoid vetting the output with an experienced editor or a subject matter expert: the proverbial “human in the loop.” As CNET discovered from one of the first Futurism exposés of this kind, simply handing the editing to junior staffers with no training in working with AI-generated content isn’t enough.
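To make that distinction concrete, here’s a minimal sketch of how a content workflow might route the two task types differently. It’s written in Python against the OpenAI client; the function names and routing logic are hypothetical illustrations for this argument, not anything SI, AdVon, or Arena actually used.

```python
# Hypothetical sketch: routing GenAI output by task type.
# Assumes the `openai` Python package and an OPENAI_API_KEY in the
# environment; no publisher's actual tooling is shown here.
from openai import OpenAI

client = OpenAI()

def adapt_existing_content(source_text: str) -> str:
    """Language task: the model only rephrases text we supply."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Rewrite the provided text for clarity. "
                        "Do not add facts that are not in the text."},
            {"role": "user", "content": source_text},
        ],
    )
    return response.choices[0].message.content

def generate_original_content(topic: str) -> dict:
    """Knowledge task: the model draws on its training data,
    so the draft is flagged for mandatory human review."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": f"Write a short article about {topic}."}],
    )
    draft = response.choices[0].message.content
    # A knowledge-task draft can't ship until an experienced editor
    # or subject matter expert signs off on it.
    return {"draft": draft, "requires_human_review": True}
```

The specific API matters less than the shape of the workflow: the moment the model is generating rather than adapting, the output gets marked as unpublishable without a human sign-off.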

Then there’s the transparency question. Obviously, no publication should publish fake bylines. And assuming generative AI contributed substantively to the actual content of these articles, that should have been disclosed; the cover-up is always worse than the crime. When another Arena Group brand, Men’s Journal, began using GenAI to contribute to articles, it actually did a pretty good job of disclosing exactly that.

But the transparency point goes even further. While the Arena Group very publicly announced its foray into generative content, I can find no mention of a formal policy about AI-generated content anywhere on its site or Sports Illustrated’s.


Citation Clarity Needed

If this incident has any takeaway for publishers broadly, it’s this: If you don’t have a policy on the role of generative AI in content, you need one. And now. Certainly, even if such a policy had existed, it’s entirely possible this incident would still have happened. The nature of the articles (evergreen, commerce-driven pieces sourced from a third party) suggests they don’t get the same level of scrutiny as the stuff that appears on the SI homepage. The content may not even have been touched by any of the people the union represents.

But a policy on AI content would have forced every department involved in content to examine its processes and ensure they were in compliance. And if unauthorized generative content still slipped through, the policy would be the thing to fall back on during the post-mortem, when figuring out which holes to patch and what precautions to take.

In short, if you’re a publisher — and you want to avoid slipping on a generative banana peel, not to mention becoming Futurism’s next target — you need a generative AI policy that sets expectations for both your readers and your staff. Because there are people in your content workflow using GenAI right now, guaranteed, either for the content itself or for processes around it, and that usage can only increase. Every day you delay in telling them what is and isn’t okay is another day a generative scandal has a chance to escape the building.


The Media Copilot also offers consulting. If you need guidance on your generative AI policy, or would like practical input on how to apply the technology, please reach out to pete@petepachal.com to schedule a time.

Like this post? Why not check out The Media Copilot podcast, available on Apple Podcasts or wherever you listen.


