The Media Copilot


How AI is changing media, journalism, and content creation


The worst thing AI did to misinformation was make it ordinary

AI is making scams and bad info routine. Journalists can’t chase every lie, but they can teach people how to verify.

Misinformation doesn’t have to go viral to do damage. When it becomes routine, the only workable defense is routine verification. (Credit: Midjourney)
Feb 17, 2026

By Pete Pachal

If you run any kind of media business in 2026, you develop a strange new hobby: speed-running your own gullibility. Every week—honestly, most days—something drops into my inbox offering to “unlock” my growth: more newsletter subscribers, a bigger podcast audience, a fatter pipeline of client leads. I’ve learned to treat these pitches like background noise. Still, a few are so polished they feel tailored, the kind that poke right at the soft spots (“you’re leaving so much on the table,” and the like). I never reply. But I do occasionally catch myself asking the annoying question: Which of these are real?


A few months ago, I decided to outsource that doubt. I was reading one of those emails and opened the Assistant sidebar in my AI-powered browser. I typed, “this look sus?” The assistant didn’t hesitate. Yes, it said: the pitch—about finding funding for The Media Copilot—left out basic details any legitimate org would include. And the sender? An email address tied to a nonexistent domain, plus no LinkedIn profile. Not subtle. Just efficient.

That moment stuck with me as I read in Time about a team at MIT that runs an online portal tracking harmful AI incidents. Their running tally makes the trend hard to ignore: the use of AI to cause harm, intentionally or not, has increased significantly over the past few years. Some of it is garden-variety error, some of it is deliberate. The fastest-growing buckets are the ones you’d expect: misinformation and malicious actors. If your goal is to mislead, misinform, or straight-up scam people, it’s never been cheaper—or easier—to operate at scale.

In theory, this is where journalism steps in. After all, one of the media’s jobs is to provide a check on misinformation. When those Biden robocalls were making the rounds, for example, the debunking was swift. But that’s the highlight reel. Most incidents don’t go viral, don’t make national headlines, and don’t trigger an army of fact-checkers. Meanwhile, the number of journalism jobs keeps shrinking, and the reporters who remain have the same constraint as everyone else: finite bandwidth.

Doubt needs direction

As misinformation from AI scales up, it’s creating a world where everyone is increasingly skeptical of what they read, see, and hear. That reflex is understandable—and corrosive. Last year, a paper from the National Bureau of Economic Research found that exposure to AI-driven misinformation led to less trust in media in general. So yes, skepticism is spreading. But skepticism, on its own, doesn’t produce clarity. It produces exhaustion.

This is where the media can still matter, even if it can’t possibly chase every fake. The most valuable move isn’t to debunk every deepfake or scam, which is clearly a losing battle. It’s to teach people how to aim their skepticism. There’s value in having a simple method for stress-testing what you see without spiraling into a “nothing is true” worldview.

The irony is that the verification tools are no longer locked inside newsrooms. They’re sitting in everyone’s browser, in everyone’s phone, baked into the same AI systems that are helping bad actors crank out lies. These tools can quickly check sources, analyze claims, and surface supporting evidence. That doesn’t mean you should treat a chatbot like an oracle about a story. But it does mean AI can be used as a lens, one that nudges you toward better questions, not instant certainty.

Think about my email example: The assistant didn’t “decide” what was true; it did the tedious work fast—looking up subjects, flagging inconsistencies, and pointing me toward new questions. That’s journalism, minus the byline. And if journalists can translate that mindset into practical guidance, readers get something better than a one-off debunk. They get a repeatable habit that helps them spot bad info, and avoid reflexively tossing the good info, too.

Keeping your guard up without giving up

So what does an “AI verification layer” actually look like in the wild? Start here: skepticism is the beginning of the process, not the finish line. Used well, it’s a tool for interrogation. Used poorly, it’s a shortcut to confirmation bias, where every vague suspicion becomes “proof” that you were right to distrust everything. Below are three habits, each rooted in basic journalistic principles, that work with almost any AI tool.

  • Ask the same question twice: A lot of AI harm doesn’t begin with malice. It begins with a user asking something ordinary, then getting nudged down a rabbit hole that gets darker or weirder with each turn. Sometimes it ends tragically. One simple way to interrupt that slide is to ask the same question again, but rephrased or reframed. Then compare what you get back. If the answers materially disagree, don’t hand-wave it away—treat the inconsistency as the story.
  • Force specificity: Good interviewers don’t let big claims float by unchallenged. When someone declares something sweeping, they press for the who/what/when. Do the same with AI. Ask it to make the claim more specific. What supports that assertion? Who was involved? What are the underlying facts? When did it happen? If the tool can’t move from generalities to concrete details, it’s a signal that the information might be thin, shaky, or invented.
  • Spot-check sources: If a claim hinges on something “out there on the internet,” verification shouldn’t be an epic quest. Follow the link. Look for the primary source. Cross-check a key detail. If you can’t confirm it in a minute or two, pause before you share it or build an opinion on top of it. Yes, there are exceptions—anonymous sources exist, and some real information is genuinely hard to verify. But friction is informative. When everything gets slippery, that’s the moment to slow down.
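Some of that tedium is even scriptable. As a minimal sketch—not any tool mentioned above, and purely illustrative—here is a Python check in the spirit of the scam-email anecdote: does the sender’s domain resolve at all? The helper names and sample addresses are assumptions for the example.

```python
import socket

def sender_domain(email: str) -> str:
    """Extract the domain portion of an email address."""
    return email.rsplit("@", 1)[-1].strip().lower()

def domain_resolves(email: str) -> bool:
    """Return True if the sender's domain resolves to at least one address.

    A domain that doesn't resolve at all is a strong "sus" signal, like the
    nonexistent domain flagged in the funding pitch described above.
    """
    try:
        socket.getaddrinfo(sender_domain(email), None)
        return True
    except socket.gaierror:
        return False
```

A passing check proves little on its own (scammers can register real domains), but a failing one is exactly the kind of concrete, checkable detail the “force specificity” habit asks for.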

Between AI hallucinations, deliberate disinformation, and the way meme culture blurs seriousness into vibes, it’s no wonder skepticism is becoming the default posture. But without a few guiding principles, skepticism doesn’t stay healthy for long. It curdles into cynicism. Journalists may not be able to verify all the things we want them to. Still, the discipline behind their work—the questions they ask, the standards they lean on—can be taught. And if those habits spread, news consumers can learn to separate good information from bad, even at scale.

Contributors

  • Pete Pachal: Author

    Pete Pachal is the founder of The Media Copilot. In addition to producing the site’s newsletter and podcast, he teaches courses on how journalists and communications professionals can apply AI tools to their work. Pete has a long career in journalism, previously holding senior roles in global newsrooms such as CoinDesk and Mashable. He’s appeared on Fox Business, CNN, and The Today Show as a thought leader in tech and AI. Pete also puts his encyclopedic knowledge of Doctor Who to good use on the popular podcast Pull To Open.

Category: AI media analysis · Tags: misinformation | journalism
Related articles

  • Journalists are opening up about AI, but one mistake shows how fragile that progress is
  • AP offers buyouts as AI and tech companies now drive revenue growth
  • New York Times cuts ties with freelancer over AI-assisted book review
  • Journalism students are more skeptical of AI than their professors
  • Breaking news is up 103% on Google as AI Overviews gut everything else
  • NewsGuard and Pangram are building an AI slop detector as content farms multiply

The Media Copilot

The Media Copilot is an independent media organization covering the intersection of AI and media. Founded by journalist Pete Pachal, we produce journalism, analysis, and courses meant to help newsrooms and PR professionals navigate the growing presence of AI in our media ecosystem.


© 2026 · All Rights Reserved