How Google’s Catastrophic AI Overviews May Actually Help News Media


Google, you’ve done it again.

In the 2000s, that sentence would have conveyed yet another win for Google creating a cloud product, like Gmail or Google Docs, that the online world quickly finds indispensable. Today, it means the emperor of search has yet again discovered it’s not wearing clothes.

More on that in a second, but I’ve got some great news about what we’re doing at The Media Copilot if you work in PR:

For one time only, our upcoming AI Fundamentals class is going to be focused exclusively on PR work. We’ve 100% revised the class to take into account all the advances in prompting, apps, and techniques from the last few months. We’ve also curated a custom toolset of apps specifically crafted to speed up PR writing, pitching, and creating on-brand media (think: articles, images and videos).

The class is happening June 20, and spots are limited. If you reserve yours now and use the code AISPRING at checkout you can get a 50% discount. If you’re new to AI, or if you’ve plateaued in your use of it in your work, this class will take you to the next level, and give you knowledge that you can bring to your colleagues. Like I said, this is happening one time only, so don’t wait!


Just a couple of months after Google released an AI model that somehow thought it was a good idea to create offensively inaccurate images of “diverse” Nazi soldiers, Google launched AI Overviews (née Search Generative Experience, or SGE) into the mainstream. These summaries — authored by Gemini, the same model that created those bizarre images — appear at the top of search results and ostensibly give users the answer they’re looking for, negating the need to click through a bunch of links.

The only problem: Those summaries are sometimes totally, comically, and dangerously wrong. You don’t need to search very long on X or LinkedIn for examples of Google’s AI Overviews advising people to use glue to keep cheese from sliding off pizza, claiming that rocks are nutritious, and even suggesting suicide as a way of dealing with depression (yikes).

Why This Is Serious

Hold on, you might say: those same “gotcha” answers have been cited in a whole bunch of places, and while they’re objectively not good, is this a case of overhyping cherry-picked examples? It’s a fair question, and the answer unlocks why Google’s search fiasco may actually be good for news media in the long run, since it’s one of the clearest signals of the value of human-driven information retrieval — or what a non-engineer might call journalism.

The premise of the question — “Is this really a big deal if it gets most of the answers right?” — ignores the role that Google plays in our information ecosystem today. Google search has largely earned its place as the default way to find things online by creating a good product. It’s not perfect, and it was declining in quality even before AI Overviews, but generally it was still perceived to be the most reliable path for people to find what they were looking for.

Because Google had held this status for so long, there was trust. Not trust in how it handled data (a whole separate issue), but trust that what it showed you was a match for your search. It leveraged this trust in many ways, one of them being the answer modules in search results, features that predate Gemini and have been around for almost a decade at this point.

Those answers sometimes suffered from being inaccurate or out of date, but not in ridiculous ways a human would recognize as obviously false. Most people treated these “zero-click” answers as the final word, and if you over-relied on them, you risked becoming fodder for the TikTok subgenre that debunks lazy research. Google never wholly fixed the problem, but since the answers at least sounded accurate, its overall trust with the general public didn’t suffer.


Google ≠ Perplexity

But the insanity of AI Overviews changes the equation. The bad answers are so over-the-top wrong that, for everyday people, one of two things will happen:

They’ll trust Google less — good for them, bad for Google.

They’ll actually rely on potentially dangerous information, which could lead to real harm — bad for everyone.

This same equation doesn’t apply to, say, Perplexity, which is building its own AI-driven Google competitor — and suffers from similar hallucinations. For starters, as a startup, Perplexity doesn’t yet have the broad, mainstream trust that Google has. But it’s also a different use case: The early adopters of Perplexity generally use it to accelerate deeper research on a topic, not to simply get quick answers on casual searches. Because of that, there’s an inherent “check” on the output it gives you.

That may change as Perplexity gets more popular, but Google IS popular, and hugely so. Because of its position in the market, it can’t behave the same way, and by releasing AI Overviews without enough checks against bad information, the public’s trust in Google search will suffer.

The Value of News Media in an AI World

This whole fiasco points the way to what news media has to offer in an AI-mediated world. Another clue is that many of the bad answers in AI Overviews can be traced back to Reddit. A lot has been made of Reddit’s role in the current digital ecosystem: its massive reservoir of content, built up over the years, makes it extremely valuable to large language models, which depend on human-generated data.

The problem is, just because something was written by a human doesn’t make it accurate. The AI Overview that pointed to glue as a great solution for your cheese-sliding-off-pizza problem relied on a Reddit post that was clearly a joke. This is evidence that AI models don’t have a good way to verify what’s accurate and what’s not in a large corpus of unstructured data.

You know who does? Newsrooms. Certainly, trust in the media is at an all-time low. I could spend weeks unpacking that, and all the partisan forces pushing and pulling on the issue, but it’s a different kind of trust that Google is harming with its desperate moves to dominate AI. Generally, people still understand that newsrooms are in the business of checking facts and typically don’t just make stuff up.

In short, AI Overviews are a disaster, and uncork a demand not just for human content, but verified human content. There’s an opportunity here for the media to play a big role, and with ChatGPT Search on its way, we may start to see how newsrooms and journalists begin filling the trust gap in an information ecosystem where the world’s search engine tells you, with a straight face, that you can safely stare at the sun for 30 minutes.

The Media Copilot is a reader-supported publication. To receive new posts and support our work, consider becoming a free or paid subscriber.
