Media Already Popped Its AI Bubble

Credit: Enrico Carcasci, Unsplash; modified by Photoshop Generative Fill

When the stock market began its most recent tumble, pundits got out their historical dartboards and compared the sudden downturn to Black Monday in 1987, the dot-com bubble burst of the early 2000s, and the Great Recession of 2008. They may have a hard time deciding which financial calamity it most resembles, but there seems to be agreement on whom to blame: AI.

The idea that AI is a bubble is hardly new — people have been wondering when the bubble would burst since at least last summer. Even I got in on the action a few weeks ago in a LinkedIn Live with Dmitry Shapiro, CEO of AI platform (and Media Copilot partner) MindStudio.

Generally, the narrative goes like this: Investors and the big tech companies have poured hundreds of billions of dollars into AI since the debut of ChatGPT. But so far this big bet on the future hasn’t paid off financially: revenue generated by AI for the major tech companies so far hasn’t come close to matching expenses. And the quiet chatter of CIOs lamenting that they still haven’t found a “killer app” for AI is getting too loud to ignore.

So, sure, the markets probably need a reality check. And the reality of AI is that it was never going to deliver on all the exaggerated promises of “10x”-ing everyone’s productivity, ushering in a world where everyone just offloads most of their work to a digital assistant. At least not for a few years yet.

If I find solace in any of this bad news, it’s that AI’s reality check has largely already happened in the media industry. When ChatGPT captured the public’s imagination, the most obvious use case was writing. After the initial wave of curiosity, where it seemed every publication created at least one “sample” article, a few publications took the idea further and attempted to use AI to publish content at scale.

The results were largely a disaster. The infamous experiments with AI from CNET, MSN, and others are now cautionary tales about how not to use AI in a newsroom. That’s not to say there aren’t plenty of sites cranking out AI slop — there are — but legit operations have abandoned any fantasy that AI would produce quality content as easily as turning on a tap.

If I had to time it, I’d say Sports Illustrated’s debacle last fall over fake writers producing obviously synthetic articles was the media’s AI crash. Since then, however, the industry has taken an approach of cautious but steady adoption. And that’s not just me saying that — a recent study from the University of Chicago ranked journalism among the professions using AI the most at work, at 64%, second only to marketing.

Journalists were one of the top professions for AI adoption, according to a study from the Becker Friedman Institute for Economics at the University of Chicago.

In newsrooms, this plays out in all kinds of ways: Tools for creating end-of-the-pipe content like social copy and SEO headlines are getting more common by the day. Major newspapers are offering audio versions of articles, read by synthetic voices. And journalists everywhere use AI as both a research assistant and editing buddy — use cases where AI’s propensity to hallucinate is either insulated from published copy or not a problem in the first place. In applications where AI does act as a kind of co-author, it’s only under strict supervision.

So while the broader AI industry may be entering the trough of disillusionment on the Gartner hype cycle, the media industry is climbing nicely along the slope of enlightenment. The question is: will the trough that AI is slipping into be so deep that it’ll drag everything else down with it?


Upgrade your AI skills this summer!

There are still spots for The Media Copilot’s upcoming AI training classes! Reserve yours today for this 100% live and interactive way to gain valuable AI skills, tailored to media work, for you and your team. Contact us at team@mediacopilot.ai for group rates.

AI Quick Start (Aug. 22): This 1-hour class costs just $80 and is a lightning-fast crash course on the basics of AI, giving you everything you need to know about prompting methodically to turn AI into your personal assistant.

Learn more: AI Quick Start (1 hour)

AI Fundamentals (Sept. 4): In 3 hours, our AI Fundamentals class is loaded with training on specific prompting for media roles, full guidance on using image generators effectively and ethically, and a curated set of tools that are guaranteed to save you time in your day-to-day work. Normally priced at $750, you can reserve one of the limited spots for a 50% discount until Aug. 23 with the discount code AIHEAT at checkout.

Learn more: AI Fundamentals (3 hours)

The Chatbox

All the AI news that matters to media

Deals, Deals, Deals: So this AI licensing thing is catching on. Several major publishers — including the Financial Times, Axel Springer, The Atlantic, and Fortune — have signed deals to license their content to ProRata.ai, a heretofore unknown AI startup. According to Axios, which broke the story, ProRata intends to build its own platform for surfacing that content, which would presumably compete with the likes of Perplexity and OpenAI’s SearchGPT.

Best of luck to them, though the company’s $25 million Series A round and impressive executive suite (which boasts franchise players from both tech and media) suggest this isn’t a shot in the dark. The deal emphasizes two things: First, licensing revenue from AI platform companies and LLM providers could be a bigger line item on media balance sheets than previously thought. And second, the content-provenance-and-compensation ecosystem (can someone come up with a better name?), which includes tech protocols like C2PA and solutions from startups like TollBit, is getting complicated… and competitive.

OpenAI is losing it: Which is to say, “it” is more of its leadership. Per The Information, President Greg Brockman is taking a leave of absence until next year, and the company’s VP of consumer product, Peter Deng, has also left. But most notable is John Schulman, one of the company’s 11 co-founders, who is leaving OpenAI for rival Anthropic so he can deepen his focus on AI “alignment”: ensuring the behavior of AI aligns with human values.

Judging from the back-and-forth on X, Schulman’s departure was amicable, but as Kevin Systrom said about leaving Instagram, “No one ever leaves a job because everything’s awesome.” Not to mention that this comes a few short months after another top leader, Jan Leike, left over concerns about how OpenAI was treating alignment and AI safety more broadly. It’s hard not to see a pattern here.

ChatGPT, should I ship this? Despite the ongoing brain drain, the team at OpenAI continues to ship new features. Hot on the heels of SearchGPT, the company has activated (for a few select users) the advanced voice mode that it showed off in the spring, or about 87 news cycles ago. More interesting, though, is a product OpenAI decided not to ship: its own AI detector that can reveal when text is written by ChatGPT, according to TechCrunch. College professors should keep the champagne on ice for the moment, since OpenAI is internally debating the wisdom of releasing a tool that would certainly add to the stigma of AI writing. As I explored in my podcast discussion with Lee Gaul, simply detecting AI-written text doesn’t mean the text is necessarily bad, wrong, or even unethical. It’s just a signal, and without guidance on what to do when that signal is present, it could lead to some unnecessarily harsh results.

