A joint white paper released Thursday by the Center for Journalism and Liberty at the Open Markets Institute and the newly launched Washington Monthly Institute argues that artificial intelligence is deepening a long-running power imbalance between technology platforms and journalism, one that first took shape as digital advertising markets shifted revenue away from publishers and toward dominant tech firms.
The report, titled “AI and the Future of Independent Journalism: The Promise and Peril of Privately Controlled Data Markets for Media Content,” was written by CJL Director Dr. Courtney Radsch. It examines how AI systems now rely on journalistic content for training while simultaneously diverting traffic and compensation away from the outlets that produced the original reporting, placing particular pressure on small, local, and independent news organizations that lack bargaining power or legal resources.
Cloudflare’s growing footprint
The report focuses heavily on the role of infrastructure providers—particularly Cloudflare—in shaping the emerging rules of AI-driven data access. Cloudflare now enables publishers to distinguish between types of web crawlers, block AI training bots, and allow search indexing under its Content Signals Policy. It has also introduced cryptographic bot verification and a pilot pay-per-crawl marketplace that would allow publishers to charge AI companies for content use.
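Cloudflare’s Content Signals approach works by extending a site’s robots.txt file with machine-readable usage preferences alongside ordinary crawl rules. A minimal sketch of what such a file can look like follows; the directive names reflect Cloudflare’s published policy, but the specific paths and values here are illustrative, not taken from any real site:

```text
# Content signals state preferences per use, separate from crawl permissions.
# search=yes allows search indexing; ai-train=no opts out of AI model training.
Content-Signal: search=yes, ai-train=no

# Ordinary robots.txt rules still govern which paths crawlers may fetch.
User-Agent: *
Disallow: /drafts/
Allow: /
```

The key design point is that crawl access and content use are signaled separately: a publisher can stay visible in search while declining to have the same pages used as AI training data.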
The paper calls these tools a “potential turning point” for publishers seeking compensation—but warns that without public oversight, they could concentrate gatekeeping power in a single infrastructure provider, effectively creating new chokepoints that replicate the extractive dynamics of the digital advertising era.
What the report recommends
The paper proposes several policy actions, including strengthening antitrust enforcement in AI markets, banning discriminatory access systems, mandating transparency for AI companies and infrastructure providers, preventing gatekeeping in AI content marketplaces, developing open technical standards for content licensing, supporting rights-based licensing frameworks, and ensuring accountability for automated systems that process journalistic content.
The report acknowledges that the tools currently in development represent the first real technical mechanism for publishers to enforce consent and receive compensation when their work is used in AI systems—but argues that technology alone cannot resolve the underlying imbalance.

How the report meets the AI moment
The paper arrives as several competing approaches to AI content licensing take shape. In April, the BBC, the FT, The Guardian, Sky News, and The Telegraph launched a joint AI licensing coalition called SPUR to negotiate collectively with AI companies. Earlier in May, the RSL (Really Simple Licensing) standard, an open protocol that lets publishers declare terms for AI use of their content, reached version 1.0; more than 1,500 publishers have since adopted it. Cloudflare has separately committed to embedding RSL licenses in HTTP 402 payment-required responses, a technical step that could give those terms real enforcement teeth.
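The HTTP 402 mechanism can be illustrated in miniature. The sketch below is a toy, not the actual RSL wiring: the `Link` header, the `rel="license"` value, and the `/license.xml` path are invented for illustration, and only the 402 status code itself is standardized behavior. It shows the basic shape of the exchange, in which a crawler’s request is refused with a pointer to machine-readable licensing terms rather than with a silent block:

```python
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class PaywalledContent(BaseHTTPRequestHandler):
    """Toy origin server: refuses crawler requests with 402 Payment Required
    and a hypothetical Link header pointing at machine-readable license terms."""

    def do_GET(self):
        self.send_response(402)  # HTTP 402: content exists, payment is required
        # Header name/value and the license path are illustrative assumptions.
        self.send_header("Link", '</license.xml>; rel="license"')
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

# Bind to port 0 so the OS picks a free port, then serve in the background.
server = HTTPServer(("127.0.0.1", 0), PaywalledContent)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A crawler fetching an article sees the refusal plus the license pointer.
try:
    urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/article")
except urllib.error.HTTPError as e:
    status = e.code                        # 402 rather than a bare 403 block
    license_link = e.headers.get("Link")   # where the terms can be found

server.shutdown()
```

The difference from simple bot-blocking is that the refusal carries a next step: instead of a dead end, the crawler receives a location where terms (and, in Cloudflare’s proposal, payment) can be negotiated.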
The question now is whether these parallel efforts—industry coalitions, open licensing standards, and infrastructure-level interventions—can be reconciled into a system that publishers can actually use, or whether they fragment into yet another layer of complexity that benefits those with the resources to navigate it.
Edited by Pete Pachal


