Finally, Some Sanity in AI Discourse

If you’ve been hanging out on Twitter recently, you might have noticed a post that has been making the rounds.

It’s entitled I Will Fucking Piledrive You If You Mention AI Again. And it’s a doozy.

The author, Nikhil Suresh, is a data scientist and has written some other exciting posts including I Will Fucking Dropkick You If You Use That Spreadsheet so there is plenty of precedent for his attitude. Essentially, Suresh goes through the current story in AI and adds a breath of fresh air or, more precisely, farts in its general direction.

See, Suresh was there at the beginning, before LLMs were a household word and we were all worried that a chatbot would replace entire newsrooms. Then ChatGPT came along:

And then some absolute son of a bitch created ChatGPT, and now look at us. Look at us, resplendent in our pauper’s robes, stitched from corpulent greed and breathless credulity, spending half of the planet’s engineering efforts to add chatbot support to every application under the sun when half of the industry hasn’t worked out how to test database backups regularly. This is why I have to visit untold violence upon the next moron to propose that AI is the future of the business – not because this is impossible in principle, but because they are now indistinguishable from a hundred million willful fucking idiots.

In short, he’s saying that we can barely back up our own software correctly, let alone harness the magical power of AI. One of his more salient points: if AI becomes hyperintelligent and decides we are disposable, then we won’t have to worry too much about our jobs. But if we apply Occam’s Razor and look for the likely scenario, most of us don’t need AI now any more than we did two years ago.

To wit:

Most organizations cannot ship the most basic applications imaginable with any consistency, and you’re out here saying that the best way to remain competitive is to roll out experimental technology that is an order of magnitude more sophisticated than anything else your I.T department runs, which you have no experience hiring for, when the organization has never used a GPU for anything other than junior engineers playing video games with their camera off during standup, and even if you do that all right there is a chance that the problem is simply unsolvable due to the characteristics of your data and business? This isn’t a recipe for disaster, it’s a cookbook for someone looking to prepare a twelve course fucking catastrophe.

Suresh goes on to say that if your company can’t survive the Age of AI then your company can’t survive anything. This, in fact, is the real reason we’re gutting newsrooms and destroying jobs: because the bosses can’t figure out a sane money-making strategy for news. You can absolutely replace 60 people with a chatbot and to do so is very fashionable. But AI isn’t the real reason you’re replacing 60 people with a chatbot. You’re replacing those 60 people because you can’t afford them and the chatbot is a great excuse which allows you to produce slime without much adult supervision. And we all know what happens when people — namely kids — produce real slime without adult supervision.

AI is a tool. It works sometimes and sometimes it doesn’t. You don’t need a RAG for your blog or your news organization unless you truly know what you want. Adding a chatbot to your site is akin to adding blockchain to your company mission statement — misguided at best, malignant at worst. Absolutely you have to be ready for the coming onslaught of AI if only to have AI on your resume so an AI-powered HR director can see it and pull you out of the slush pile. It’s a valid technology.

But all of the use cases outside of “Write me an essay on themes in Hawthorne’s The Scarlet Letter because I forgot to do it for homework last night” require some careful thought. I’ll leave you with this final anecdote from Suresh:

An executive at an institution that provides students with important credentials, used to verify suitability for potentially lifesaving work and immigration law, asked me if I could detect students cheating. I was going to say “No, probably not”… but I had a suspicion, so I instead said “I might be able to, but I’d estimate that upwards of 50% of the students are currently cheating which would have some serious impacts on the bottom line as we’d have to suspend them. Should I still investigate?”

We haven’t spoken about it since.

AI can do amazing things. But do we want it to?

