Battle Lines


It’s getting tense out there.

For starters, there’s a growing skepticism about the real utility of AI. More headlines are throwing cold water on the idea that AI will “10x your productivity” or whatever, and that skepticism seems to be reflected in slowing investment in the industry. All the while, tech companies are still fiercely competing in the race to build the best AI, despite a lack of clarity about whether the billions they’re spending to do so will ever truly pay off.

Against this backdrop, the ongoing cold war between the news industry and Big Tech saw a notable flare-up when Google announced it was testing a version of its search results in California that didn’t include links to news stories.

This was in response to a new law being considered in the state that would effectively put a tax on links to news. For every link to a news story that shows up in search, Google would have to pay a fractional fee, the logic being that the search giant benefits from the existence of that news and has paid nothing for it. Google, of course, argues that such a fee makes no sense, since it’s publishers who benefit from the traffic Google sends to their sites.

If it feels like you’ve seen this movie before, you have: Similar laws have been tried in both Canada and Australia. In both countries, when Meta responded by removing news links from its platforms, the effect on publishers was devastating.

To be clear, this isn’t the law yet in California, and Google’s action was only a test. But it was also a message — the company published a very public blog post about what it was doing, lambasting the proposed law and suggesting that it would benefit media conglomerates and hedge funds instead of small publications.

You can put aside, for a minute, the irony of a $2 trillion company standing up for the little guy, because Google clearly did, too: The company said it would pause any future California-based investments in the Google News Initiative, which provides smaller newsrooms with money and free access to AI-powered tools for news gathering and production.

So to discourage legislators from helping save the news industry, Google is squeezing that industry even further.

You really couldn’t ask for better evidence of Google’s massive conflict of interest when it comes to its involvement in news. A major concern in the media is generative search: when an AI chatbot simply summarizes the news for you, the very traffic that Google cites as the benefit of its service gets cut off from news publishers. To what extent, we don’t know yet, but the day is undoubtedly coming (check the Otherweb story below for an example of how this would work).

So Google is simultaneously running a program that provides AI-powered tools to publishers while using that same AI technology, in the form of generative search, to decimate the traffic the news industry relies on. Certainly, news media needs all the help it can get given the state of the industry, but this development should give any newsroom pause before accepting Google’s “help.” However good its tools are (and there’s definitely reason to be skeptical on that score, too), Google will always put its own priorities first, and right now that means asserting dominance in AI.

As for the question of sending search traffic to news, and the lack of it in generative experiences, this bill won’t solve that problem. But it could be part of how we get to a solution that isn’t entirely dictated by Big Tech.


More AI+Media Stories:

Meta Catches Up: Since pivoting from the metaverse, Mark Zuckerberg has made it his company’s mission to create the leading AI model — or at least one on par with OpenAI and the other major players. With Llama 3, he seems confident that Meta has done just that: The company isn’t just launching a ChatGPT clone with meta.ai, it’s also integrating its AI tech directly into the search bars on Facebook, Instagram, and WhatsApp. As everybody rushed to try it out, I noticed that this time we heard barely a whisper from critics trying to play “gotcha” by getting Llama to generate inappropriate images, the way they did all over X when Llama 2 launched. As I’ve said before, this layer of AI safety is better dealt with downstream from the LLMs, so I’m regarding the lack of meltdowns over people doing juvenile things with chatbots as a win for progress.

Not So Stable: More signs that the AI hype bubble is shrinking, if not quite popping just yet: Stability AI just laid off 10% of its workers. This comes in the wake of the CEO abruptly exiting after serious differences of opinion with his senior staff and investors. Lest we forget, Stability also saw a senior VP leave the company last fall over its approach to copyright. Stable Diffusion, the company’s image generator, was an early darling of the current AI boom, but with the company embroiled in chaos and lawsuits (Getty sued the company for training its model on Getty’s image library), it’s hard to see a comeback in the works — especially since Midjourney, OpenAI’s DALL-E, and Gemini are generally regarded as easier, better options.

A Chatbot for News: What does good AI-powered news look like? We recently spoke to Alex Fink, the CEO of AI startup Otherweb, about exactly that. Now his company is taking its news aggregation app, which gives each story a “nutrition label” based on its content, to a new level: It’s launching a “news concierge” that will summarize stories on the fly. While general-purpose chatbots like ChatGPT or Copilot often miss the mark when you ask for recent news (if they surface it at all), Otherweb’s responses are comparatively helpful. When I asked my concierge “Marty” to tell me what Meta announced this week, it focused on Llama 3 and the Meta Quest, not board appointments from two months ago (which is what ChatGPT gave me). And yes, it included links to the sources. Otherweb is a good example of how to create a generative search experience around news — now all we need is a business model.

Watermarking Makes a Splash: The idea of labeling AI-generated images with a watermark is catching on. Snap said this week it would add watermarks to images created with Snapchat’s AI tools, and Meta’s new Llama 3-powered chatbot also tags its images with a tiny logo that says “Imagined with AI.” This might be why we haven’t seen as many freakouts about inappropriate images (see above) lately. Watermarks are generally frowned upon for most professional uses, so the move will likely drive users toward services that don’t include them and create demand for an option to remove them (possibly by paying more). There’s also unquestionably a stigma attached to synthetic content, and it’ll be interesting to see whether use of AI images drops as watermarks become more standard.



