The Signs Newsrooms Are Getting Serious About AI

Credit: DALL-E

There’s been a steady drip-drip-drip of major publishers signing deals with AI companies for the past few months, but that doesn’t mean newsrooms are actually using the tech. One of the more amusing contradictions lately was when The Atlantic ran a piece by Jessica Lessin, CEO of The Information, about the dangers of the media getting too cozy with Big AI a few days before the publication signed an agreement with OpenAI.

That said, there are signs that the news media is getting serious about deploying AI in some form. On various job boards, more senior positions directing AI strategy are appearing, including jobs at Hearst Newspapers and Scripps. It’s not just the big players — with the influence to command direct deals with the likes of OpenAI — who are looking to leverage the power of large language models (LLMs).

While this is a welcome development, it feels a little late to me.

ChatGPT arrived in November 2022 with all the subtlety of one of those flying saucers from Independence Day. Now that AI is poised over the media, seemingly at the ready to activate its annihilator, newsrooms are finally figuring out they need to take an active role in shaping how AI will alter the media landscape.

Many journalists are understandably skeptical of how much AI can make a difference in their day-to-day. I think a great deal of that comes from the damage caused by the terrible experiments in early 2023, when a few publications and platforms thought generative AI was an easy path to creating content at scale.

It’s now much clearer that direct content creation is only one use case. Certainly there’s the low-hanging fruit of social posts and headlines, but several forward-thinking newsrooms are putting AI to work in novel ways:

The Washington Post and New York Times are mass-deploying spoken articles

Semafor has a research and writing assistant across languages

A host of publications and platforms are beginning to deploy AI summaries of articles

What these novel use cases don’t capture, however, are the subtle ways AI is being used — the quick questions to ChatGPT about story ideas, the AI-powered features in video editing tools that cut clips easily, the lightning-fast research aided by Perplexity. Those are creating better journalism, bit by bit, and newsrooms might want to consider systemizing and training around some of those use cases.

Of course, that requires investment and time. In journalistic environments, there are ethical and cultural factors to take into account, and navigating those is key to properly transitioning from experimentation to implementation. For resource-strapped newsrooms, it’s a tall order, but investing in AI now will ensure you can compete effectively later.

With the emergence of more AI-based jobs, it’s clear more newsrooms are making the choice to invest early. If yours hasn’t yet, I’m curious: what’s stopping you? Reply, comment or DM me with your thoughts if you have a second.


The Chatbox

All the AI news that matters to media

What just happened? Don’t ask AI: The Washington Post spent a week using chatbots as a go-to source of real-time news, and it turns out they suck at it. From the assassination attempt on former President Trump to President Biden dropping out of the race, chatbots such as ChatGPT and even Perplexity failed to accurately answer questions in the hours after each event. Microsoft Copilot apparently did the best of the popular AI chatbots, likely because it generously cites sources.

But the overall poor performance makes sense: although browsing the web is fundamentally different from training, AI engines ultimately are looking for patterns in data, and when that pattern is just starting to be drawn, it’s difficult to process information that may contradict everything that’s come before. OpenAI et al. are certainly working to crack this nut for the inevitable ChatGPT Search, but until then it’s probably best to take the advice of the chatbots themselves on this: Better to go to trusted news sources for breaking news.

Credit: Meta

A grand opening: Say this about Mark Zuckerberg: Dude has a vision. In a lengthy treatise on the release of Llama 3.1, Meta’s latest and greatest open-source model, Zuckerberg laid out exactly why he believes open-source AI will move the industry forward and ultimately win out over commercial, closed-source models like OpenAI’s. The usual suspects of better security, lower cost, and avoiding vendor lock-in are all there, but Zuckerberg also draws an analogy to the rise of Linux as evidence that open source is the way to go.

In a case of interesting timing, Zuckerberg’s screed landed just as a group of Democratic senators sent a pointed letter to OpenAI, asking the notoriously secretive company to submit its next most powerful model for government review before releasing it. Also, it turns out one of the main worries about open-sourcing AI, the proliferation of deepfakes, isn’t as big a problem as we thought. Score one for open-source AI, at least this week.

Enterprise not thrilled with AI: The Information ran a “reality check” on enterprise AI adoption with Chevron CIO Bill Braun. The verdict? Not fully in yet, but it’s fair to say he’s less than thrilled with the results so far: the company’s Enterprise AI team has yet to find an AI-powered application that significantly moves the bottom line. Perhaps it will eventually, but the interview is a useful case study in how not to deploy AI at your company. Don’t just give a ton of people access to AI chatbots and tools and ask them to figure it out. Instead, focus on problem solvers and enthusiasts, do some initial training so users understand the fundamentals of prompt engineering, and temper expectations right out of the gate. It’s going to take time for AI to become embedded in work; I’m sure Microsoft Word didn’t produce massive productivity gains in its first year, either.

Bringing the AI heat to climate change: The Washington Post may be going through some stuff, but it’s quietly becoming a leader among major news outlets in reader-facing AI features. After launching spoken versions of its news articles with synthetic audio, the Post has built a chatbot trained specifically on its climate coverage, which Nieman Lab covered extensively. Since much of the interest in climate change centers on evergreen questions (causes, effects, overall seriousness) rather than any single news story, climate may be the newsroom topic best suited to the chatbot use case.

