Key Takeaways
- Alliance for Audited Media is developing an ethical AI certification framework.
- CEO Richard Murphy: standards are needed because AI is already operational.
- Certification could separate responsible use from opaque automation.
By The Media Copilot
AI is no longer experimental in media. It is operational.
From drafting articles to generating images to influencing distribution, artificial intelligence is now embedded across the entire content pipeline in many organizations. But as adoption accelerates, trust is breaking down just as fast.
In this episode of The Media Copilot, Pete Pachal talks with Richard Murphy, CEO of the Alliance for Audited Media, to unpack a growing industry response: ethical AI certification.
Murphy explains how publishers, advertisers, and audiences are all asking the same question in different ways: How do we know what is real, who created it, and whether we can trust it?
The answer, at least in part, may lie in standards.
Drawing from AAM’s newly developed framework, Murphy walks through the pillars of responsible AI use, from transparency and disclosure to human oversight and data protection. The goal is not to slow innovation, but to create guardrails that keep media credible in an era where AI can generate anything.
Listen or watch:
Why this matters
Media has always relied on trust as its currency. AI is testing that foundation.
When audiences cannot tell whether content is human-created, AI-assisted, or fully synthetic, credibility becomes fragile. At the same time, advertisers and partners are demanding proof that what they are funding or distributing meets ethical standards.
This is where certification enters the picture.
Ethical AI frameworks are quickly becoming more than best practice. They are emerging as a competitive advantage, a compliance strategy, and potentially a defense against future regulation.
The bigger shift is this: AI is not just changing how content is created. It is redefining what accountability looks like in media.
What we cover
- What “ethical AI certification” actually means in practice
- The 8 pillars of responsible AI use in media organizations
- Why disclosure is moving from optional to essential
- The difference between AI-assisted and fully AI-generated content
- Where most trust failures are happening today
- Why self-regulation may be the industry’s best shot before government intervention
- How AI is impacting not just content creation, but distribution and business models
- The growing role of advertisers, partners, and audiences in demanding transparency

About the Guest
LinkedIn: https://www.linkedin.com/in/rmurphy01
AAM Leadership Bio: https://auditedmedia.com/about/leadership
Alliance for Audited Media: https://auditedmedia.com
Digital Content Next (Articles): https://digitalcontentnext.org/blog/author/richmurphy/
About the show:
Enjoyed this episode?
Subscribe to The Media Copilot on Substack, Apple Podcasts, Spotify, or your favorite app. On YouTube? Tap the Like button and Subscribe to the YouTube channel.
For more AI tools and resources built for media professionals, visit MediaCopilot.ai.
Produced by Pete Pachal and Executive Producer Michele Musso
Edited by the Musso Media Team
Music: “Favorite” by Alexander Nakarada, licensed under CC BY 4.0
All rights reserved. © AnyWho Media 2026
