A New Year’s Resolution for Newsrooms: Clear GenAI Policy


This week on This Week in Digital Media’s Subtext, I’ve been talking about how generative AI can be applied to journalism today (Subtext is a broadcast-via-text service). Subscribers are able to respond to the broadcasts, and it’s been great to hear the myriad perspectives on the state of AI in newsrooms.

In response to a question I asked the group, a respondent alerted me to NPR’s public-facing policy regarding the use of generative AI in reporting. It’s brief, but it shows that a newsroom policy doesn’t have to be long or all-inclusive to provide clarity. It also explicitly forbids journalists from inputting any intellectual property — including their own notes — into an AI chatbot or service that might use that data to train or refine its models.


You could debate whether that’s sound policy or overcautious (I’d definitely lean toward the former), but the point is clarity: the editorial staff knows what the rules and best practices are. This is something the industry needs a lot more of, and soon, if the results of a Reuters Institute survey on the state of newsrooms have anything to say about it.

The Institute’s annual Changing Newsrooms report touches on several subjects, but as you’d imagine for a study that looks at major trends affecting the news in 2023, AI factors heavily. The report surveyed 135 “senior industry leaders” in 40 countries. From the responses, it appears that newsrooms with GenAI policies in place are distinctly in the minority.

A mere 29% of newsrooms said they had implemented “high-level principles” about using GenAI. That strikes me as fairly low; I would have expected more urgency to create policies after so many egg-on-face incidents in which various publications, wittingly or unwittingly, let generative content into the wild with disastrous results. Perhaps the Sports Illustrated debacle, which led to several executives losing their jobs, will accelerate things.

As that infamous incident made clear, not having a clear GenAI policy means more opportunities for generative content to slip through, and such content will only become more prolific as time goes on. Even if there is a general expectation among your editorial team that all copy should be human-generated, you can be certain some of your tech-savvy reporters are dabbling in generative AI for tasks one or two steps removed from actual copywriting, such as research and image creation. If they don’t know where the lines are, how can you be sure they won’t step over them?

On the plus side, 39% of the people surveyed said their newsrooms are actively working on a GenAI policy, and another 21% are considering one. If the perceived complexity of the topic is holding them back, perhaps NPR’s guidelines can point the way: being clear is much more important than being comprehensive.

If you’d like to hear more from me on how AI is changing journalism, head on over to This Week in Digital Media on Subtext and subscribe for free.
