Pushing Back Against ‘Gray Slime’

Credit: DALL-E

As generative AI reduces the cost of spinning up content sites to zero, it seems inevitable that the mountain of bot-written crap out there in internet land will soon be so large that human-created works will never escape its shadow. And while that’s probably true, there’s still hope that we can at least declare some of the mountain off-limits from search engines.

Case in point: Google’s March 2024 Core Update to search, released earlier this month, specifically targets poor-quality results and spam. And, according to this article in Search Engine Journal, it’s already having an effect: Google has deindexed — completely removed from search results — many sites that rely either entirely or mostly on AI-generated content.

Citing two independent studies, the piece reveals that about 2% of sites have been wiped from Google, essentially a death sentence with respect to SEO. While 2% may not sound like a lot, it amounts to an erasure of thousands of websites and a collective loss of millions of dollars in potential search and commerce revenue.

The update sends a strong message to the likes of the Serbian DJ profiled recently in Wired: an internet entrepreneur who made a career of buying up URLs with SEO clout — including the long-dead women’s indie blog The Hairpin — and turning them into AI-powered content farms. Notably, the zombified version of The Hairpin no longer appears on Google.

For all the worry about AI-generated content taking over the internet (which still may happen), this move suggests it may be a solvable problem, and that the market will revert to some kind of equilibrium. Even though Google is a major player in generative AI, it still makes virtually all its money via search, and has a very clear interest in keeping that page of blue links useful. While its stance toward AI authorship overall remains agnostic, it’s aggressively moving against “unhelpful” content. The term may be vaguely defined, but we all know it when we see it.

As David Caswell, a builder of AI tools for newsrooms, told us in a recent podcast, as we move toward an AI-mediated ecosystem, we’re going to need better filters. That’s not just to protect ourselves against the ersatz Hairpins of the world and other “gray slime,” but also for training the AI models of the future. (It turns out training an AI on AI-generated content is generally a bad idea.)

While this is far from the last word on where AI-generated content resides in our media ecosystem, for now, at least, Google is punishing those who would use it for evil. That might actually give the rest of us a chance to figure out how to use AI for good.


Speaking of using AI for good, doing so properly means we need a new set of rules for content creation — an AI writing manifesto. John Biggs and I are working on exactly that: It will contain a series of sections on how to ethically use AI, how to handle AI-generated art, and what AI companies need to remember when scouring the Internet for data.

We need your help to make it happen. If you’d like to add a thought or two, please head over to this open Google Doc. It can be an immutable law, a rant, or even a note on how you use AI in your own writing. Imagine you’re going to use this to teach future journalists and writers how to do their jobs in this changing environment.
