Most newsrooms that have adopted AI policies have done something admirable — and insufficient. A new briefing from the Center for News, Technology & Innovation synthesizes 30 peer-reviewed research papers on AI governance and finds that existing policies get the principles right but skip the operational details journalists actually need to follow them.
Key Takeaways
- CNTI synthesis of 30 papers: newsroom AI policies are strong on principles, weak on practice.
- 52 newsrooms in 12 countries emphasize transparency and human supervision.
- Almost none address procurement or the operational steps journalists actually need.
The CNTI report, released Feb. 17, is the third briefing from the organization’s AI and Journalism Research Working Group. Reviewing policies from 52 global news organizations across 12 countries, researchers found that newsrooms consistently prioritize transparency about AI use, human supervision of AI tools and human verification of outputs. But few policies define what “appropriate” or “proper” AI use actually means in practice.
The gap matters. As one example, AI translation tools can introduce gender biases — assuming doctors are men, nurses are women — that a journalist using a third-party tool may never catch. Existing policies focus on AI outputs, not the systems that produce them, making these subtle errors nearly invisible.
The procurement blind spot is arguably the bigger problem. Researchers found almost no AI policies that address how newsrooms should evaluate, contract with or monitor third-party AI vendors. A 2025 study of 16 AI tool contracts found that most gave developers the right to change terms of service without notice — a risk most individual journalists aren’t even aware of. Meanwhile, newsrooms’ growing reliance on tools built by Google, Microsoft and Amazon deepens their dependence on the same platform companies that already control much of their distribution.
The policy gap isn’t limited to the Global North. A Thomson Reuters Foundation survey of 221 journalists in the Global South found that roughly 80 percent said their newsrooms have no AI policy at all. That number has likely improved since the survey was conducted, but the structural barriers — no access to technical expertise, difficulty getting organizational buy-in, the pace of technological change — haven’t gone away.
The working group’s practical recommendation: treat AI policy development the way you treat coverage decisions. Include people with different job responsibilities and lived experiences. Draw on the lessons of earlier technology policy cycles — photo editing, social media — where the same tension between values and operational specifics played out. And start thinking seriously about procurement: what your AI contracts actually say, who can change them, and whether your organization has the leverage to push back.
For most newsrooms, the answer to that last question is no — but knowing that is the first step toward addressing it.