Can Search Engines Really Be ‘Agnostic’ About AI Content?

Credit: DALL-E

As we all wait with bated breath to see what Apple has to say about AI at WWDC later today, evidence continues to pile up showing how much our information ecosystem isn’t ready for the glut of AI content.

The New York Times reported that the site BNN Breaking (not to be confused with Canada’s BNN, a business-focused news channel), which painted itself as a global news organization with reporters around the world, was in fact an “AI chop shop” — a content operation staffed mostly by low-paid writers who used AI to crank out massive amounts of aggregated articles.

It’s a pretty scandalous story, and one that points to weaknesses in how certain platforms currently handle AI-fueled content. Before I get into the specifics, it’s worth noting that stories like this show just how important it is to be AI-literate in 2024. Understanding how AI is used in content creation is now an essential media skill, and if you haven’t taken steps in that direction, it’s not too late: The Media Copilot’s next class on AI Basics is happening in just a couple of days.

Beginning AI for Journalists, Marketers, and PR Pros is a 1-hour crash course that will not just teach you skills for using AI to speed up creative work but also give you a strong set of foundational principles, so you can get the benefits of AI assistance where it makes sense and avoid going down the dark path toward “spammy” content like BNN’s. And for the next few hours, you can take advantage of our AISPRING discount code to get the class for a mere $40. You can start your AI journey by signing up here.


The case of BNN Breaking is another alarm bell: the informational pipes we rely on to deliver good information when we go looking for it aren’t yet equipped to handle the massive influx of AI-generated or -augmented content that is clearly underway. Put simply, we need better filters, and the BNN story helps us pinpoint exactly where we need them.

Google can’t be “agnostic” any longer. Google’s stance has been to not pass judgment on an article because it’s AI-written; after all, good information is good information, regardless of whether a human or AI wrote it. This position feels correct in principle, but it fails to account for reality.

The fact is, while not all AI content is spam, virtually all spam today uses AI content. When I recently spoke to Originality.ai founder Jon Gillham, he confirmed this. So while a draconian ban on AI content from search results would be too harsh (and utterly unrealistic, given Google is heavily invested in the success of Gemini, its AI model), search engines should set a higher quality bar for sites that primarily use AI content. We shouldn’t have to wait for The New York Times or Wired to investigate a site before it disappears from Google.

Human auditing is needed for syndication. One of the reasons BNN has attracted so much attention is that its content was syndicated to MSN, a popular news aggregator operated by Microsoft (disclosure: Media Copilot staff have done consulting work for Microsoft). Recently, MSN has been taking human editors out of the picture and relying more on AI to curate the sources and articles on the site.

BNN Breaking isn’t the first AI dung pile Microsoft has stepped in. Last year, some incomprehensible sports write-ups slipped through as well. Given the scale at which MSN operates, it would be unrealistic for humans to edit every article on the site, but it seems clear there needs to be an extra layer of human auditing at the source level. Real people should evaluate sites before they’re included, with regular follow-ups, to ensure AI “chop shops” don’t get mixed in.

Legit sites have moved on from scaling AI content. In the “good news” department, the silver lining of this story is that BNN Breaking wasn’t a well-known site; it was operated by Gurbaksh Chahal, a millionaire and “serial entrepreneur” with a poor reputation. When BNN disappeared from the internet, no one missed it.

The implication: legitimate news sites appear to have mostly moved on from the idea of scaling up their content operations with low-quality AI articles. That doesn’t mean there aren’t good ways to use generative AI to assist in writing certain news stories; Semafor Signals, for instance, appears to be an early example of a constructive approach.

Today, BNN Breaking is no more, but Chahal moved his AI content factory to another site, TrimFeed, which was also shut down as the Times report became imminent. This can’t go on forever, though. At some point, the system needs to inoculate itself against bad actors who exploit AI to pump “gray slime” into our informational diet. Google and Microsoft aren’t going to slow down the push for AI, but they can both do more to promote the healthy version of it.

Don’t forget to check out our AI classes — both our 1-hour Beginning AI class happening June 12 and our 3-hour AI Fundamentals class on June 20. For the latter, we’re specifically tailoring the class materials for PR pros this month. Don’t wait! There’s only one more day to use code AISPRING at checkout for 50% off.
