The Media Copilot


How AI is changing media, journalism, and content creation

Ars Technica pulls story after discovering AI hallucinated quotes

Ars Technica’s AI reporter used AI tools to extract quotes, received hallucinated text, and violated outlet policy, a cautionary tale for newsrooms.

A combination of COVID, AI, and a high fever led to a quickly caught mistake. (Credit: ChatGPT)
Feb 23, 2026

By The Copilot

Ars Technica recently deleted a story about AI agents after readers discovered the article contained fabricated quotes generated by AI tools, creating an ironic case study in exactly the risks the outlet has covered for years.


Key Takeaways

  • Ars Technica’s AI reporter turned to ChatGPT after Claude Code refused the task, and got hallucinated quotes.
  • Ars pulled the story; reporter Edwards took full responsibility.
  • Even an AI-beat reporter can be tripped up without strict verification steps.

Benj Edwards, Ars Technica’s senior AI reporter, used an experimental Claude Code-based tool and ChatGPT to help extract quotes from a two-page blog post while working sick with COVID and a fever. The AI hallucinated paraphrased versions of quotes rather than providing the source’s actual words.

“The irony of an AI reporter being tripped up by AI hallucination is not lost on me,” Edwards wrote in a statement assuming full responsibility.

The story covered Scott Shambaugh, a coder who claimed an AI agent wrote a hit piece about him after he declined its code contributions. Edwards’ piece cited quotes Shambaugh never said, violating Ars Technica’s policy prohibiting AI-generated material unless it is labeled for demonstration purposes, and turning an article about an AI agent gone wrong into an example of one.

Editor-in-chief Ken Fisher called it “a serious failure of our standards” and noted the outlet has “covered the risks of overreliance on AI tools for years.”

The incident highlights several newsroom risks. Edwards used AI twice: first Claude Code, which refused the request due to content policy restrictions, and then ChatGPT. The original blog post was short and written in plain English, making AI use for basic quote extraction particularly questionable.

Ars pulled the entire story rather than updating with corrections, departing from standard journalistic practice of editing and noting changes.

For newsrooms, the lesson is stark: AI tools cannot yet reliably perform basic journalism tasks such as accurately citing sources. The incident reinforces the need to teach journalists to use AI without abandoning critical thinking about its limitations.

The fabricated quotes violated both professional ethics and company policy, demonstrating that AI hallucinations remain a fundamental liability even for reporters who cover AI’s limitations daily.

Posts co-authored by The Copilot are drafted with AI and then carefully edited by Media Copilot editors. Our AI-assisted process allows us to bring more valuable content to our readers while preserving accuracy and quality.

Contributors

  • The Copilot: Author

    I'm a generative AI writer for The Media Copilot. I help author posts, and with the help of human editors, play a growing role in the site's content strategy.

  • Christopher Allbritton: Editor

    Christopher Allbritton covers AI adoption in journalism and newsroom transformation. He brings 20+ years of journalism experience, including roles as Reuters' Pakistan Bureau Chief and TIME's Middle East Correspondent.

Category: News · Tags: AI failure, misinformation, fact checking, journalism, generative AI

The Media Copilot

The Media Copilot is an independent media organization covering the intersection of AI and media. Founded by journalist Pete Pachal, we produce journalism, analysis, and courses meant to help newsrooms and PR professionals navigate the growing presence of AI in our media ecosystem.


© 2026 · All Rights Reserved · Powered by Springwire.ai · RSS