The Media Copilot


How AI is changing media, journalism, and content creation


Journalists are opening up about AI, but one mistake shows how fragile that progress is

As prominent journalists go public with their AI workflows, a plagiarism scandal at The New York Times reveals how quickly momentum can reverse

The trust between journalists and AI is real, but one plagiarized book review just showed how thin the glass actually is. (Credit: Gemini)
Apr 21, 2026

By Pete Pachal

My usual focus is the cutting edge of AI in media: examining how journalists and media companies are using the technology to change the way they work, reach new audiences, and transform their organizations. But the reality is that a persistent stigma still hangs over artificial intelligence in the journalism world. In conversations I have with working reporters and editors, there's clearly still a lot of reluctance toward, if not outright disdain for, using AI in almost any part of their work.


Recent media coverage, though, paints a different picture. The Wall Street Journal recently profiled how Fortune business editor Nick Lichtenberg uses AI to turbocharge his output, sometimes writing as many as seven stories in a single day. The same day, Wired highlighted how several prominent reporters—including independents like Alex Heath and Taylor Lorenz as well as The New York Times’ Kevin Roose—use AI in various editorial tasks, sometimes in the writing itself.

Taken together, it feels like a dam has finally burst. And I don’t think the timing is accidental—this shift is happening alongside the arrival of Claude Code and Cowork, which has put remarkably powerful agentic AI within reach of everyone and reshaped what people expect from these tools. (An interesting aside buried in all this coverage of journalists’ use of AI is that it appears Claude is rapidly becoming what the Mac became among media pros: the platform of choice for creatives who “know better.”)

A plagiarism scandal puts AI trust on ice

But just as the relationship between journalists and AI seemed to be thawing, a high-profile incident threw it back into doubt. Last week, The New York Times severed its relationship with a freelance writer who had submitted a book review that was at least partially AI-written. The review by Alex Preston, published in early January, included passages that were nearly identical to Christobel Kent’s review of the same book that was published in The Guardian months earlier.

Preston admitted he used AI to assist in writing his book review, saying that he had “made a serious mistake.”

The episode is a clear wake-up call for the Times—and not its first—about communicating AI policy to freelancers. But it also sends a warning signal to every newsroom that has been inching toward greater AI adoption. Here, suddenly, was an error that appeared to validate all the restrictive rules.

Confronting what happened directly matters. The incident steers us back into the dark cave of AI scandals in media—from CNET’s bot-authored service journalism to the made-up book titles in the Chicago Sun-Times’ “summer reading list” last year. It risks erasing the productivity and content optimization gains that many journalists and newsrooms have been making, and could push those just beginning to experiment with AI back toward the simplest possible rule: don’t use it at all.

That makes it essential to examine specifically how AI was deployed here, so we can draw a clearer line between responsible and irresponsible use. It’s easy to say there wasn’t enough “human in the loop” (an increasingly unhelpful term)—but where in the loop? With prompting, fact-checking, something else? The whole point of AI is to outsource some human decision-making to sophisticated machines, so rather than pointing out the obvious—that humans need to shape and monitor the process—it’s better to zero in on the specific decisions that AI was asked to make, and whether the human gave the right parameters and restrictions.

When you look at the details, the answer is clearly no. According to The Guardian story, the two reviews have eerily similar language—so close that it’s difficult to argue against outright plagiarism. Consider these side-by-side passages:

  • Original review, published August 21, 2025: “most significantly a song of love to a country of contradictions, battered, war-torn, divided, misguided and miraculous: an Italy where life is costume and the performance of art, and where circuses spring up on wasteland.”
  • Times review, published January 6, 2026: “populate what is ultimately a love song to a country of contradictions: battered, divided, misguided and miraculous. This is an Italy where life is performance, where circuses rise on wasteland.”

Given the dates and the undeniable overlap, a few things become clear. Preston evidently asked the AI—directly or indirectly—to generate text he planned to use in the piece, and not just from his own notes. The four-month gap between the two reviews (and likely an even longer lead time given the Times’ editing process) almost certainly means the AI’s training data didn’t include Kent’s review. That points to the AI tool pulling the copy from web search—a form of retrieval-augmented generation, or RAG.

This was the critical error. Giving Preston the benefit of the doubt, he may not have deliberately told the AI to synthesize other reviews of the book; perhaps it pulled in The Guardian review on its own. But he certainly didn’t tell the AI not to do that—an essential instruction if you want to avoid exactly the kind of plagiarized text he ended up including.

Moving from stigma to smart adoption

It bears repeating: in most cases, how you use AI matters far more than whether you use it. Getting there requires deep familiarity with these tools’ strengths and weaknesses, careful attention to prompt design, and a commitment to continuous adaptation. It’s an ongoing process, and it needs guardrails—such as “always” and “never” commands to avoid specific problems and (human) fact-checking. Without those safeguards, you’re handling a loaded weapon that can easily misfire.
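To make the guardrail idea concrete, here is a minimal sketch of how "always" and "never" commands could be baked into a reusable system prompt for an AI writing assistant. The rule text and the helper function are illustrative assumptions, not the author's actual workflow or any specific tool's API:

```python
# Illustrative only: encoding "always" / "never" guardrails as a reusable
# prompt preamble, so every drafting session carries the same restrictions.

ALWAYS_RULES = [
    "Always work only from the notes and source material I provide.",
    "Always flag any passage whose phrasing did not come from my notes.",
]

NEVER_RULES = [
    "Never search the web for, or draw language from, other reviews of this work.",
    "Never present retrieved text as original phrasing.",
]

def build_guardrail_prompt(task: str) -> str:
    """Prepend the task with explicit ALWAYS/NEVER guardrails."""
    rules = [f"ALWAYS: {r}" for r in ALWAYS_RULES]
    rules += [f"NEVER: {r}" for r in NEVER_RULES]
    return task + "\n\nGuardrails:\n" + "\n".join(f"- {r}" for r in rules)

print(build_guardrail_prompt(
    "Draft a 900-word review of the novel from my attached notes."
))
```

The point isn't the code itself but the discipline: the restrictions live in one place, get applied every time, and can be revised as trial and error reveals new failure modes—rather than being retyped (or forgotten) prompt by prompt.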

Broader structural protections help, too. Whether you’re an independent writer or a full newsroom, it pays to have an AI policy. As a media AI trainer, I would of course encourage investing in training, but even setting my bias aside, it’s a good idea. Most importantly, the trial-and-error that comes with figuring out the boundaries of “good AI” should be kept out of public view whenever possible.

When it comes to AI-assisted writing specifically, developing your prompts and safeguards in a private sandbox is critical. That might seem obvious, but one of AI’s most deceptive qualities is that it produces outputs that look indistinguishable from work that went through a rigorous human process. To someone without experience, that surface-level competence feels sufficient.

Truly making AI work as a writing and journalism partner means going beyond trusting the process—it means accepting responsibility for building, testing, and refining that process yourself. The more journalists do that, the more the stigma will fade.

A version of this column appears in Fast Company. It has been lightly “remixed” (alternate words and phrasings used) with AI assistance and human review.

Contributors

  • Pete Pachal: Author

    Pete Pachal is the founder of The Media Copilot. In addition to producing the site’s newsletter and podcast, he also teaches courses on how journalists and communications professionals can apply AI tools to their work. Pete has a long career in journalism, previously holding senior roles in global newsrooms such as CoinDesk and Mashable. He’s appeared on Fox Business, CNN, and The Today Show as a thought leader in tech and AI. Pete also puts his encyclopedic knowledge of Doctor Who to good use on the popular podcast, Pull To Open.

