What the Creative Backlash Against AI Means for Journalism

Probably not coming to a newsroom near you. (Credit: DALL-E)

Are we in a full AI backlash? You’d be forgiven for thinking so, especially in the wake of Ted Chiang’s blockbuster New Yorker piece, which argues at length that AI is not so special after all. That follows a Gallup poll showing Americans are twice as likely to believe AI will do more harm than good.

More thoughts on what Chiang’s piece means in a minute. But I also want to remind anyone just back from vacation and wanting to hit the ground running by learning how to use AI that my next AI Fundamentals class is tomorrow (Sept. 4) at 1 p.m. There’s still time to get one of the last remaining spots. Click below to learn more and sign up.

Start learning

Also, don’t forget to subscribe to The Media Copilot podcast. I’ve got a great guest lineup this fall, and I’m looking forward to getting into the weeds with some newsroom leaders, media executives, and builders on the AI-media ecosystem that’s taking shape. We kick things off this Friday with the team at Rolli, which, if you’re a journalist, should definitely be on your list of essential resources.

Now let’s pay a bill, and then I’ll give my thoughts on what The New Yorker piece means for AI in journalism.

Keep Your SSN Off The Dark Web

Every day, data brokers profit from your sensitive info — phone number, DOB, SSN — selling it to the highest bidder. And who’s buying it? Best case: companies target you with ads. Worst case: scammers and identity thieves.

It’s time you check out Incogni. It scrubs your personal data from the web, confronting the world’s data brokers on your behalf. And unlike other services, Incogni helps remove your sensitive information from all broker types, including those tricky People Search Sites.

Help protect yourself from identity theft, spam calls, and health insurers raising your rates. Plus, just for The Media Copilot readers: Get 55% off Incogni using code COPILOT.

How it works

A Scathing Takedown of AI ‘Creativity’

I did not expect, over Labor Day weekend, to be gobsmacked by one of the most insightful essays on generative AI and what it means for creativity. But there it was, all over my X feed on Sunday morning — a New Yorker column from author Ted Chiang entitled “Why AI Isn’t Going to Make Art,” which throws a bucket of ice water onto the idea that generative systems will ever be capable of real art.

The essay garnered tons of praise over the weekend, which it certainly deserves. Chiang’s rhetorical darts hit bull’s-eye after bull’s-eye on the list of “doubts we’ve had about genAI but couldn’t quite put our finger on.” Among them:

The inherent blandness of most AI writing: “[AI takes] an average of the choices that other writers have made, as represented by text found on the Internet; that average is equivalent to the least interesting choices possible.”

Why the Google ad from the Olympics felt wrong: “No one expects a child’s fan letter to an athlete to be extraordinary… The significance… comes from its being heartfelt rather than from its being eloquent.”

AI as a work product ouroboros: “Someone might use a large language model to generate a document out of a bulleted list, and send it to a person who will use a large language model to condense that document into a bulleted list. Can anyone seriously argue that this is an improvement?”

Chiang talks about art generically in the piece, though he focuses mostly on what large language models (LLMs) mean for writing. It makes sense: it was ChatGPT, not DALL-E or Midjourney, that captured the public imagination about what AI could do. And considering how much work is done with, well, words, LLMs obviously have the most potential to transform our economy and, more fundamentally, our expectations of how work gets done.


Journalism as Art

With respect to journalism, two sections in particular resonated with me. First, his description of art as the product of a series of choices. Professional writers and reporters think hard about the words they choose, the order of facts, the context they provide. While an AI can help make some of those choices for you, it’s clear that the more of them you offload, the more “average” the final product will be.

Second is Chiang’s best point — that language ultimately functions as a form of communication. And since LLMs have no intention to communicate, the simulated writing they create has little value without that intent. Ultimately, this is his way of saying AI doesn’t “think,” which we know, but AI’s ability to simulate human expression encourages us to ascribe human will to that expression. That doesn’t exist; as I teach in my classes on AI, its goal is merely to give you something that satisfies your prompt. There’s no spark there.

It’s tempting to interpret Chiang’s essay as a call to turn away from generative AI in its entirety. But I don’t think that’s the point, and it’s an unattainable goal anyway: With more than 70% of businesses using the technology already, it would be hard to reverse the engines of progress at this point.

I’d instead point to Chiang’s persuasive argument as a filter — one for finding the right ways to use generative AI. If it all comes down to choices, and I agree it does, how can genAI preserve the maximum number of creative choices for the human while still capturing some productivity gain?

This is a tricky thing, and it involves lots of sub-questions: Are some of those choices “lower-level” than others? Is it possible to offload certain lower-level choices to a generative system, at least some of the time? How much does the person’s experience alter that equation?

Different types of work will have different answers to those questions. And I’m fully aware that creating a hierarchy of choices begins to slide down the slippery slope of thinking that art is “all inspiration and no perspiration,” as Chiang says. But it’s equally true that certain choices in the creation of a work can be offloaded with little creativity lost. When I write, I don’t design or even choose fonts, for example, but I’d argue that’s borderline irrelevant to the actual writing.

Can AI Use Be Disciplined?

For journalism, targeting the “low-level” choices typically directs the use of generative AI to the two ends of the story pipeline: the story generation stage (story ideas, research, etc.) and the story finishing/distribution stage (proofreading, headlines, social copy, etc.). For most newsrooms, the middle part — the actual writing of the story — is still a “no GenAI” zone. 

It may not always be this way, but I again side with Chiang on his prediction that an AI truly capable of anything a human can do — what many refer to as AGI — is likely a long way off. A robot reporter simply won’t work: since AI has no intent to communicate anything, it’s difficult to imagine one persuading a source to tell it something interesting, even if it were capable of sending emails and making phone calls.

Chiang’s essay provides needed reassurance to journalists and other creatives looking to put AI in its place. AI, he says, “is a fundamentally dehumanizing technology because it treats us as less than what we are: creators and apprehenders of meaning. It reduces the amount of intention in the world.”

That’s only true if creative professionals use it in a way that reduces the net number of decisions they make. If instead they methodically target choices they know are rote, and at least somewhat removed from the final work, and a human reviews what the AI chooses, they can increase their creative output while making the same number of human-driven choices.

In other words, disciplined use of AI, especially in a fact-based profession like journalism, can unlock benefits of efficiency and productivity while preserving creative “agony.” We know that art is struggle, but not all struggle in pursuit of that art is equal.

The Media Copilot is a reader-supported publication. To receive new posts and support The Media Copilot, consider becoming a free or paid subscriber.

Ready to start using AI like a pro?

