Understanding the Deepfake Problem

Credit: DALL-E

I’ve been thinking a lot about deepfakes lately. That’s probably because they were one of the main topics I spoke about last week at CoinDesk’s Consensus conference. In a session called “From Taylor Swift to the 2024 Election: Deepfakes vs. Truth,” I explored why the deepfake problem is so vexing in today’s media environment, dissecting the issue alongside many of the people working on it.

There’s no easy way to deal with the problem because it’s really multiple problems, each with its own variables and solutions. But we shouldn’t shy away from that complexity, since broad-spectrum remedies can produce secondary effects that are just as bad as the harms they’re meant to fix.

More on that in a minute, but first I’d like to take a second to introduce you to some of our hand-picked affiliate partners. They’re all great services, and we encourage you to support them if you have a need (we may earn a commission if you click on one of our links).

Incogni is a personal data removal service that scrubs your personal information from the web. Get 55% off with the code COPILOT.

Frase.io is a purpose-built AI writer that crafts SEO-focused articles, with detailed guidance on keywords and how to rank higher than competitors.

Surfshark is a budget-friendly VPN with all the perks, highly ranked by PCMag and TechRadar. Use our link to save 86%, plus get 3 months free.

Deepfakes — images or audio that simulate a real person (usually a celebrity or politician) or a news event — have been part of our information ecosystem for years. The term became part of our online vocabulary in the late 2010s as face-swapping software started popping up in apps like Snapchat.

Most early deepfakes were crude and fairly easy to spot, and creating a convincing one required quite a bit of effort and specialized knowledge. With generative AI, however, that’s no longer the case: pretty much anyone can create synthetic images of real people from a text prompt.

I’ll stop here to clarify: this isn’t always a bad thing. Certainly, where the subject has consented, it’s incredibly useful to be able to create simulations of a person’s image or voice. And even outside that consent bubble, free expression (many deepfakes are satirical in nature) needs to be protected.

Of course, there is no end of examples of deepfakery used for nefarious purposes. From the Biden robocalls to the fake attack on the Pentagon to the violative images of Taylor Swift that circulated widely online earlier this year, the litany of incidents, taken together, cries out for some kind of solution.

Generally, we want an information ecosystem that discourages and disincentivizes bad information and violative acts. At the same time, it should protect the right to free expression and allow the sharing of imagery that entertains or satirizes its subjects. But since people draw the boundaries between those categories differently, we need a way of dealing with the deepfake issue that gives participants in the system the tools to make those judgments for themselves.

The Two Types of Harmful Deepfakes

Almost all harmful deepfakes fall into one of two buckets:

