The best proof of how deeply AI has woven itself into everyday life is the shorthand we’ve adopted for it. It’s now extremely common for someone to say they asked “chat” for some piece of information. We all know what they mean.
Key Takeaways
- BBC, FT and Guardian launched SPUR to set joint AI licensing standards.
- Cloudflare’s Pay Per Crawl gives the coalition technical leverage.
- SPUR is recruiting publishers globally before AI access rules harden.
But if you’d prefer to quantify the matter, OpenAI recently revealed that ChatGPT has 900 million users, up from 800 million in the fall. Gemini, Copilot, and Claude are all gaining ground, too. Even without those competitors, ChatGPT’s trajectory alone would be enough for the media industry—publishers, brands and marketing/PR agencies—to really internalize that AI is becoming a major discovery channel. It may not drive traffic like Google, but it has clearly inserted itself as a meaningful layer between content creators and the people who consume their work.
This shift explains the growing interest in GEO (generative engine optimization) in recent months, a topic I’ve written about often. But the focus on how to get AI search engines to notice and reference content doesn’t mean publishers should ignore a more fundamental question: how did that content end up inside these systems to begin with, and what kind of compensation—if any—should flow from that?
Public opinion, at least, seems clear. Surveys, such as this one from OnMessage last fall, consistently show people think content providers should be compensated when their work is ingested by AI engines. The AI industry tends to have a different view, often suggesting that “publicly available” data (i.e., stuff on the internet) is fair game. The reality is more complicated, but the core dynamic remains straightforward: The AI companies have the leverage, and publishers by and large don’t.
Resistance gets organized
That imbalance is exactly what a new industry group hopes to correct. In late February, a group of U.K. media companies—including the BBC, the Financial Times, and The Guardian—announced the creation of SPUR, or Standards for Publisher Usage Rights. In an open letter, the companies laid out the group’s mission: “to establish shared technical standards and responsible licensing frameworks that ensure AI developers can access high quality, reliable journalism in legitimate, responsible and convenient ways.”
In practice, SPUR aims to give the publishing industry a collective voice in negotiations with AI companies. Currently, publishers have a hodgepodge of solutions: You could pursue a licensing deal with one of the big AI companies, an option available only to publishers above a certain size. You could sue the AI companies, an expensive proposition. Or you could try to defend your content through a combination of paywalls, bot-blocking protocols, and nascent technologies aimed at getting AI crawlers to pay for access.
The underlying premise of SPUR is straightforward: strength in numbers. Although it's starting with a handful of U.K. publishers, the organization is actively recruiting media companies around the world. Through collective action, something the news media is traditionally allergic to, SPUR believes it can create enough critical mass to define a framework for how AI services will pay for access to content.
The group’s chances improve even more with powerful allies in the mix. Last year, Cloudflare stepped into this fight, advocating on the side of publishers. The company also brought significant technical muscle to the effort: because a large share of web traffic flows through Cloudflare’s infrastructure, it wields considerable influence over the rules governing online access—and which ones actually get enforced. As part of its push against unauthorized AI scraping, it introduced Pay Per Crawl, a tool that lets publishers charge bots for content access.
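The pay-per-crawl idea can be sketched in a few lines. The following is a simplified illustration of the concept, not Cloudflare's actual protocol: the crawler names, header names, and price are hypothetical. A known AI crawler that arrives without a payment token gets an HTTP 402 Payment Required response quoting a price; one that presents a token gets the content and is charged.

```python
from typing import Optional

# Hypothetical values for illustration only.
KNOWN_AI_CRAWLERS = {"ExampleAIBot", "DemoCrawler"}
PRICE_PER_PAGE_USD = 0.01

def handle_request(user_agent: str, payment_token: Optional[str]):
    """Return (status_code, headers) for an incoming request."""
    if user_agent not in KNOWN_AI_CRAWLERS:
        return 200, {}  # ordinary traffic passes through untouched
    if payment_token is None:
        # 402 Payment Required: quote a price instead of serving content.
        return 402, {"crawler-price": f"{PRICE_PER_PAGE_USD:.2f}"}
    # Crawler agreed to pay: serve the page and record the charge.
    return 200, {"crawler-charged": f"{PRICE_PER_PAGE_USD:.2f}"}
```

The key design point is that the price quote lives in the refusal itself, so a cooperative bot operator can discover the terms programmatically rather than negotiating out of band.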
Cloudflare’s solution is actually one of several on the market, and while SPUR says it doesn’t intend to play favorites, Pay Per Crawl is exactly the kind of technical barrier the group was created to encourage. And the need for such barriers is acute: unauthorized AI crawling is widespread. TollBit, which publishes quarterly reports about bot activity, recently highlighted the problem of third parties leveraging virtual, “headless” browsers (essentially bots accessing sites as if they were humans and then scraping them) on an industrial scale to crawl vast amounts of data—the equivalent of a fishing trawler.
For the longest time, the only technical weapon digital publishers had was the robots exclusion protocol (robots.txt), but it's an honor system that can easily be ignored or bypassed. According to sources familiar with SPUR, the coalition's primary objective is equipping publishers with stronger defenses. If accessing content becomes difficult and cost-prohibitive, the theory goes, bot operators will have no choice but to come to the table and negotiate.
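To see why robots.txt is only an honor system, here's a minimal sketch using Python's standard `urllib.robotparser` (the crawler names are hypothetical). A polite crawler checks the rules before fetching; a non-compliant one simply skips the check, and nothing on the publisher's side can tell the difference from the file alone.

```python
from urllib.robotparser import RobotFileParser

# A typical robots.txt blocking one (hypothetical) AI crawler.
ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A compliant crawler asks permission before fetching...
print(parser.can_fetch("ExampleAIBot", "https://example.com/article"))  # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/article"))  # True

# ...but nothing enforces the check: a non-compliant crawler never
# calls can_fetch() and downloads the page anyway.
```

The protocol is advisory by design, which is exactly the gap that paid-access and bot-blocking tools aim to close.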
The line between user and bot
The biggest wild card here is agents. AI companies crawl publisher content for three main reasons: to feed training datasets, to power search results, and to fulfill individual user queries. It's the last category that has proved most contentious, and it was the impetus behind a war of words between Perplexity and Cloudflare last summer. Historically, user agents have been exempt from blocking because they function as proxies for real people rather than bulk scraping operations. Importantly, though, they don't behave like humans (for example, they don't look at ads), so many sites (and especially publishers) believe they should be entitled to block them.
There are calls to regulate this dimension of AI crawling, and it factors into the ongoing lawsuits between media companies and the AI industry. But those approaches drag on; SPUR is acting now. It’s easy to see how this could escalate into an arms race. When individual publishers were going it alone against the AI industry, they were badly outmatched. But a broad industry coalition, backed by technical allies like Cloudflare, might actually have a chance to push back.
And so begins the difficult task of rallying a famously fragmented media industry around a common cause. And the clock is ticking: Consumer habits are changing fast. Every time someone asks “chat” about the news instead of visiting a publisher’s website, another human visitor gets replaced by an AI agent. SPUR may give publishers a chance to shape that system, but it is taking form with or without them. Once those rules harden, changing them will be much harder.
A version of this column first appeared in Fast Company.

Frequently Asked Questions
What would a coordinated publisher alliance do?

A coordinated publisher alliance would pursue collective action to standardize how AI companies access, license, and compensate publishers for online content. Rather than each publisher negotiating individually from a weak position, an alliance gives publishers collective bargaining leverage to establish fair terms for AI content use across the industry.

Why can't individual publishers negotiate with AI companies on their own?

Individual publishers—even large ones—have limited leverage against major AI companies like OpenAI, Google, and Anthropic, which have vastly more resources. An AI company can exclude any single publisher from training data without significant impact to their systems. A collective alliance changes this by making it costly to exclude the entire group.

How would a publisher alliance affect AI search and answer engines?

If a publisher alliance established licensing requirements for AI content access, AI search and answer engines would need to license content from alliance members, restrict how deeply they extract content from alliance sites, or face coordinated legal action. This could substantially reshape how tools like ChatGPT Browse, Perplexity, and Google AI Overviews function.

Would a publisher alliance raise antitrust concerns?

Publisher alliances that collectively set prices or terms for content licensing could face antitrust scrutiny in the US and EU, as coordinated pricing agreements between competitors raise competition law concerns. Any such alliance would need to be structured carefully with antitrust counsel to achieve collective leverage without triggering regulatory action.

Are there precedents for collective content licensing?

Relevant models include collective rights organizations like ASCAP for music licensing, France's neighboring rights laws that forced Google to negotiate with news publishers, Australia's News Media Bargaining Code that produced publisher-platform deals, and the Content Authenticity Initiative. These provide frameworks that a global publisher AI content alliance could adapt.