I roll my eyes every time someone obsequiously mentions the potential for deepfakes to “do harm”, and lately all that rolling has been causing me eye strain.
The whole concept of a “deepfake” is bankrupt; it’s just a lie in a new format. We put video on a pedestal, and now that the overwhelming wave of automation has knocked it down¹, we need to stop wringing our hands and accept the new reality. Video can’t be trusted. Society has to adapt, without leaning on censorship.
We are strong enough to survive a Google Docs that doesn’t interfere with text generation, and a Photoshop that doesn’t interfere with still image generation, and goddammit even an OpenAI Sora or Google Veo that doesn’t interfere with video generation. Video isn’t special.
The sharp edge of this is ugly, but there’s nothing new about that. The sharp edge of text and still image generation was already ugly. The legal and social tools we have for dealing with vile speech don’t need to be expanded to prevent deepfakes. Instead, we should change them to revert video to the mean: treat it just like they already treat text and still images. Video is now no better than rumor. It shouldn’t hold up in a court of law or the court of public opinion anymore.
The alternative is to keep video on its pedestal and prohibit lying by policy. This is tempting because it directly expresses what we really want: for lies to be transparent, for us to be able to trust whatever we read or hear or watch. But it’s stupid because it’s impossible. You can’t prohibit lies, and we wouldn’t want to live in a society that could.
Counterintuitively, once we accept that, we’ll better provide for those wounded by all the sharp edges.
-
1. An utterly predictable outcome, discussed speculatively for decades. ↩︎