Is the Bloom Off the AI Rose?


Hype cycles are nothing new. Pete and I were in the middle of the last big hype cycle, crypto, and while that space has cooled off considerably, there is still a distinct possibility it will come roaring back, like a plague, or more charitably, a cold sore. But AI’s hype cycle started strong and is suddenly hitting a wall, a wall induced by cringeworthy interviews with OpenAI C-levels and, more interestingly, by people actually using the tech.

First, we have this long interview with Sam Altman, CEO of OpenAI, on the most recent Lex Fridman podcast. In it, he basically says ChatGPT is bad.

“I think it kind of sucks,” Altman said. “I think it is our job to live a few years in the future and remember that the tools we have now are going to kind of suck, looking backward at them, and that’s how we make sure the future is better.”

What Altman is really saying is that he knows his product is producing actual garbage and he doesn’t want to be blamed. During the Dot-Com boom, I assure you, no web entrepreneur said, “Well, this weird GIF here is pretty primitive now, but just wait until we can serve 4K video.” Instead, an entire generation worked around the limitations. Altman, on the other hand, says we have to live with them.

Then we have the disastrous interview with Mira Murati, in which the OpenAI CTO said she wasn’t quite sure how the company trained its video models. She basically wouldn’t admit to wholesale theft.

So first, the rose is dying primarily because companies like OpenAI can’t get enough hardware to run their models, and they are tempering expectations accordingly. Further, the news stories coming out about energy usage are alarming, though not as strident as the complaints about crypto. Those are functional issues, issues that could be solved with better tech and hardware.

But then there are the cultural issues. We are slowly learning that AI doesn’t really work without human intervention. Check out this thread about an AI-generated cookbook:

Image via Twitter

The cookbook in question looks professional. Writing a cookbook has gotten easier: You can get a cover made for something like this on Fiverr and maybe you can hire someone to write the recipes. But wouldn’t it be cheaper to just AI-generate a woman’s face for the author photo and blast out some hot garbage? Absolutely.

Image via Twitter

Image via Twitter

Even the book description is crazy:

Just as though Grandma had sneaked into your home to surprise you with one of her famous recipes.
I bet you have been told: “Leave it you will never have those mouth-watering dishes like your granny”. Anybody who says this fluff just never tried to use a Crock Pot to cook

And the reviews are *chef’s kiss*:

I recently purchased this crockpot recipe book, and I must say I’m satisfied with the variety of beef recipes. The beef chapter offers a good selection of dishes, from classics to slightly more unique options. The recipes are clear and easy to follow, even for beginners in slow cooking. I’ve tried some of the recipes, and I’ve been pleased with the results. Although some recipes require longer cooking times, the end result is worth the wait. Overall, it’s a good recipe book for those looking to experiment with beef.

What is really happening is that generative AI fans — like us — have discovered that their favorite tools are creating a real mess. Whereas the Media Copilot is focused on training journalists and marketers how to use these tools effectively, others are using these tools to create dreck. This isn’t new — the novel gave way to the penny dreadful, and VHS didn’t really improve much in the world of cinema, but it did wonders for porn — but at this point nothing is sacred. Hundreds of AI-generated books are storming Amazon as we speak. Deepfakes are confounding the gullible. Take Shrimp Jesus, for example. This odd character is popping up on Facebook and getting crazy traffic because, well, people are stupid. From 404:

What is happening, simply, is that hundreds of AI-generated spam pages are posting dozens of times a day and are being rewarded by Facebook’s recommendation algorithm. Because AI-generated spam works, increasingly outlandish things are going viral and are then being recommended to the people who interact with them. Some of the pages which originally seemed to have no purpose other than to amass a large number of followers have since pivoted to driving traffic to webpages that are uniformly littered with ads and themselves are sometimes AI-generated, or to sites that are selling cheap products or outright scams. Some of the pages have also started buying Facebook ads featuring Jesus or telling people to like the page “If you Respect US Army.” 

We were supposed to get flying cars and jetpacks. Instead we got new ways to separate our in-laws from their retirement savings. It’s frustrating, it’s stupid, and it’s not how any of this was supposed to work. But given the quality of minds running media and AI research these days, it’s no surprise that it’s happening.

So what can we do about the problems with AI? We need a new set of rules for content creation — an AI writing manifesto. We’re working on exactly that: It will contain a series of sections on how to use AI ethically, on AI-generated art, and on what AI companies need to remember when scouring the Internet for data.

We need your help to make it happen. If you’d like to add a thought or two, please head over to this open Google Doc. It can be an immutable law, a rant, or even a note on how you use AI in your own writing. Imagine you’re going to use this to teach future journalists and writers how to do their jobs in this changing environment.
