
Why AI content labels keep failing the people who need them most

The Emily Hart case reveals a gap between what platforms promise on AI transparency and what users encounter in their feeds.

AI content labels exist. Platforms just don't want you to see them. (Credit: Google Gemini)
May 12, 2026

By Pete Pachal

Fake accounts are as old as social media itself. So when it came to light that a “hot girl” MAGA personality named Emily Hart was actually a 22-year-old male medical student in India, it could have been dismissed as just another internet deception story. Just another catfisher, another sock puppet, another scammer—the internet is full of them.


But this case was different. This one had photos. And videos. And thousands of followers across multiple networks, with some posts getting millions of views. Emily Hart was a full-on influencer, not just some anonymous egg. The person who created Emily confessed to Wired that, while the account was active, he was making thousands of dollars every month from posting softcore videos to an OnlyFans competitor and from selling merchandise.

Emily’s creator is not a developer. He’s just a cash-strapped student with a good sense of American political culture and a Google Gemini account. Yet the Emily Hart story has done more than expose one fraud. It’s put a spotlight on how thoroughly AI has lowered the barrier for almost anyone to produce convincing content and manipulate social media’s engagement systems.

That reality raises a set of urgent questions. Is anyone looking out for us? How can you tell what’s real and what’s not anymore? And who is responsible for alerting social media users that the images they’re looking at might have come from AI?

How cheap AI tools made fake influencers scalable

The real significance of the Emily Hart story has little to do with a single fake account. The major implication is that this is the tip of the iceberg. AI has made creating online personas like Emily so easy that it’s enabled deception at scale. The Wired story points to other pro-Trump fake influencers like Jessica Foster, but you don’t have to look very far on your Instagram Explore page before you spot something AI-generated, and it’s rarely disclosed. The Emily Hart case proves that the template is cheap, fast, lucrative, and easy to copy.

Every major social network has policies that address AI-generated content. While they vary in detail, the gist is generally the same: Synthetic images must be disclosed, especially if they could be construed as real and the subject matter involves sensitive topics like politics, health, finance, and current news. If an account doesn’t identify AI content, it can be frozen, demonetized, or banned.

In practice, those consequences almost never materialize. Enforcement is hard, partly because detecting AI content is getting harder by the day. Most state-of-the-art image generators are light-years ahead of the models that created the first “Will Smith eating spaghetti” video, and telltale artifacts like extra fingers and disappearing background characters have largely become a thing of the past. Without watermarks, even automated systems struggle to tell AI images from real ones just by looking at them.

Content Credentials and the AI labeling problem

A new standard was supposed to fix this. Content Credentials, built on the C2PA provenance standard, track how an image was created and modified throughout its life cycle. That provenance data can live in the image’s metadata, so the site displaying it can more easily tell whether it’s AI-generated, potentially passing a label or warning on to the user. The idea is that, as you scroll your social feed, any image would have a tiny icon next to it that reveals its history when clicked.
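For a sense of how this works under the hood, here is a minimal sketch of a presence check. In JPEG files, C2PA manifests are carried as JUMBF boxes inside APP11 marker segments, so scanning for APP11 is a quick heuristic for whether any Content Credentials are embedded at all. The filename is a placeholder, and this only detects the container; actually validating a manifest requires a full C2PA library.

```python
import struct

def has_app11_jumbf(path: str) -> bool:
    """Heuristic: does this JPEG contain APP11 (0xFFEB) segments?
    C2PA Content Credentials are embedded in JPEGs as JUMBF boxes
    inside APP11, so their presence hints that provenance data exists.
    This does NOT validate the manifest, only detect the container."""
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":              # no SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                  # lost marker sync; give up
            break
        marker = data[i + 1]
        if marker == 0xFF:                   # fill byte; resync
            i += 1
            continue
        if marker == 0xDA:                   # SOS: image data starts here
            break
        seg_len = struct.unpack(">H", data[i + 2:i + 4])[0]
        if marker == 0xEB:                   # APP11: JUMBF/C2PA container
            return True
        i += 2 + seg_len                     # skip marker bytes + payload
    return False

print(has_app11_jumbf("photo.jpg"))          # placeholder filename
```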

However, even though this technology has existed for years and ostensibly has the support of major tech companies such as Adobe, Google, and Nvidia, social platforms haven’t adopted it consistently. Seeing the label is rare, and a Washington Post report found that social networks often strip out the metadata that enables Content Credentials. The stripping isn’t necessarily a deliberate act of sabotage; it follows a best practice from the early days of the web, when every byte was precious. But the fact that it’s still happening shows there is little enthusiasm to make the system work.
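To see how easily that happens, consider a routine image-processing step of the kind upload pipelines run on every photo. The sketch below uses Pillow, which re-encodes the JPEG from decoded pixels and writes a fresh file without carrying over APP11/JUMBF segments, so any embedded Content Credentials are silently discarded. The filenames are placeholders, and has_app11_jumbf is the heuristic check from the sketch above.

```python
from PIL import Image  # pip install Pillow

# A typical feed-optimization step: downscale and re-encode. Pillow
# rebuilds the JPEG from pixels and does not copy over APP11/JUMBF
# segments, so embedded Content Credentials don't survive the trip.
img = Image.open("photo.jpg")              # placeholder input file
img.thumbnail((1080, 1080))                # resize in place for the feed
img.save("photo_feed.jpg", quality=82)     # fresh JPEG; provenance gone

print(has_app11_jumbf("photo.jpg"))        # True if credentials embedded
print(has_app11_jumbf("photo_feed.jpg"))   # False after the re-encode
```

Nothing in that pipeline is malicious; it’s exactly the byte-saving hygiene described above, which is why preserving provenance takes deliberate engineering effort rather than default behavior.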

Does labeling even change behavior? Emily’s creator says he believes many of his followers didn’t care whether the images he was posting were AI or not. That may be true for some, but data suggest labels can alter people’s propensity to engage with AI content. A 2024 study found that labels on AI-manipulated media reduced belief in the claims. The study also found that wording matters: “manipulated” or “false” were more impactful than process-based labels alone.

Put another way: labels work, but toothless labels work poorly. A buried “AI info” tag is not the same as a clear warning that an image might depict a person who does not exist.

The technical capacity to do better clearly exists. Platforms like Facebook, Instagram, YouTube, and TikTok already process and modify content at scale. They’ve spent two decades honing the detection of copyright violations, nudity, spam, and engagement signals. It is hard to believe they are incapable of building a clearer label for AI-generated people.


Why platforms have reason to keep AI labels weak

The question then becomes: why haven’t they? The uncomfortable answer is that the incentives point the other way. While platforms want to keep bad content out, they are more motivated to keep people posting, scrolling, sharing, and buying. AI-generated material fits neatly into that machine because it is cheap to make, easy to personalize and highly compatible with engagement-driven feeds.

Mark Zuckerberg has been unusually direct about this, describing AI-generated material as “a whole new category of content” that he sees as important for Facebook, Instagram and Threads. That framing doesn’t signal that Meta or any other platform actively wants deception — deception is a subcategory of AI content, not the whole thing. But it does mean the companies have a business reason to welcome more synthetic content, and making the labels too strong or too visible could dampen the engagement they’re trying to encourage.

External pressure could shift the math, though. Europe’s AI Act includes transparency obligations for deepfakes and certain AI-generated public-interest content, with related rules taking effect this year. Should platforms start to rack up major fines for poor labeling, things could change in a hurry. Advertiser pressure would help, too, since appearing next to deceptive content is bad for business. Finally, and crucially, there’s audience behavior: if users begin to feel like they can’t trust what they’re seeing on a network, they might, over time, stop engaging with that network.

The disclosure system failure

At the moment, detecting AI content has become largely the user’s problem, with social platforms not prioritizing the technical progress that might help, and regulators only beginning to act. And you might ask what the point is: many of Emily’s followers no doubt knew she was virtual but followed, engaged, and maybe even forked over some money anyway. But that calculus depends entirely on having the information. The choice to engage or not with a virtual influencer is taken from you if you don’t know it’s virtual in the first place.

The technology industry has spent years presenting provenance as a central answer to synthetic media. Adobe, Microsoft, Meta, OpenAI, Google and others have backed standards, joined coalitions, made public commitments and embedded Content Credentials into their tools. Fine. Then show it to people. Make it visible before the share, before the follow, before the subscription, before the merchandise purchase. Because if the only way to learn that an influencer is fake is to wait for a magazine investigation, the disclosure system has already failed.

A version of this column appears in Fast Company.

Contributors

  • Pete Pachal: Author

Pete Pachal is the founder of The Media Copilot. In addition to producing the site’s newsletter and podcast, he teaches courses on how journalists and communications professionals can apply AI tools to their work. Pete has a long career in journalism, previously holding senior roles in global newsrooms such as CoinDesk and Mashable. He’s appeared on Fox Business, CNN, and The Today Show as a thought leader in tech and AI. Pete also puts his encyclopedic knowledge of Doctor Who to good use on the popular podcast Pull To Open.
