Fighting Back Against Deepfakes

The Lightwave

Practical Insights for Skeptics & Users Alike…in (Roughly) Two Minutes or Less

“Reality is that which, when you stop believing in it, doesn't go away.”

– Philip K. Dick

Fighting Back Against Deepfakes

Last week, we looked a bit into Generative Adversarial Networks (GANs), a type of machine learning model that consists of two neural networks working in opposition to each other:

  • The Generator tries to create increasingly realistic fake data to fool the Discriminator.

  • The Discriminator gets better at identifying fake data produced by the Generator.
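If you'd like to see that tug-of-war in (toy) code, here's a minimal sketch using PyTorch. It isn't any real deepfake model, just an illustration of the idea: the Generator learns to fake simple one-dimensional "data" while the Discriminator learns to tell real samples from fakes. All of the layer sizes, learning rates, and step counts here are arbitrary assumptions.

```python
# A toy GAN sketch: Generator vs. Discriminator on 1-D "data".
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "real" data: samples from a normal distribution centered at 4.
def real_batch(n=64):
    return torch.randn(n, 1) + 4.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Discriminator step: label real data 1 and generated data 0.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Generator step: try to make the Discriminator call its fakes "real" (1).
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After enough rounds of this back-and-forth, the generated samples
# should drift toward the real data's mean (about 4).
print("mean of generated samples:", generator(torch.randn(1000, 8)).mean().item())
```

The same push-and-pull is what makes deepfakes so convincing: the better the Discriminator gets at spotting fakes, the better the Generator gets at producing them.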

Today, we’re going to look at some of the ways researchers are trying to protect images from being manipulated by AI. From harmless face-swaps to deepfakes, our digital likenesses can be altered without our consent, and this is only becoming easier. Imagine waking up to find your face plastered across the internet in compromising situations you were never actually in. It's a scary thought, right? Especially given that headlines often travel much, much faster than the truth.

Two of the Latest Tools

In the battle against deepfakes, two groundbreaking tools have emerged:

PhotoGuard:

Developed by MIT researchers, this clever tool adds an invisible layer of protection to your photos—a kind of digital forcefield that confuses AI systems without changing how the image looks to human eyes.

There are two methods:

  • Encoder attack: This tricks the AI into “thinking” the image is something completely different, like a gray blob.

  • Diffusion attack: This method disrupts how AI generates new images based on the protected photo.

Glaze:

Created at the University of Chicago, Glaze is designed to protect artists' unique styles from being copied by AI. It's like giving your artwork a secret handshake that only humans can understand.

How Do They Work?

Essentially, both tools work by making subtle changes to images that throw AI systems off track. When an AI tries to manipulate a protected image, the result is a warped, unrealistic mess.

They add a layer of "noise" that humans can't see but that interferes with AI systems' ability to accurately process or manipulate the images. It's similar to how a thin, transparent film might protect a physical photograph from damage: you can still see the photo clearly, but it's harder to alter.
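To make that "invisible noise" idea a bit more concrete, here's a simplified, hypothetical sketch in PyTorch. It is not PhotoGuard's or Glaze's actual code: the tiny encoder below is just a stand-in for whatever model an editing AI might use, and epsilon is an assumed budget for how much each pixel is allowed to change so the edit stays invisible.

```python
# A simplified sketch of protective "noise": nudge an image, within a tiny
# per-pixel budget, so a stand-in image encoder maps it toward a meaningless
# gray target (echoing the "gray blob" encoder attack described above).
import torch
import torch.nn as nn

torch.manual_seed(0)

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))  # stand-in encoder
image = torch.rand(1, 3, 32, 32)                                   # the photo to protect
gray_target = encoder(torch.full_like(image, 0.5)).detach()        # "gray blob" embedding

epsilon = 8 / 255          # max per-pixel change: far too small to notice
noise = torch.zeros_like(image, requires_grad=True)

for _ in range(100):
    # Push the encoding of (image + noise) toward the gray target...
    loss = nn.functional.mse_loss(encoder(image + noise), gray_target)
    loss.backward()
    with torch.no_grad():
        noise -= 0.01 * noise.grad.sign()          # take a small gradient step
        noise.clamp_(-epsilon, epsilon)            # ...while keeping the change tiny
    noise.grad.zero_()

protected = (image + noise).clamp(0, 1)
print("max pixel change:", (protected - image).abs().max().item())
```

The real tools aim at the actual image-generation models rather than a toy encoder, but the principle is the same: small, targeted changes that matter a lot to the model and not at all to your eyes.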

Of course, these are not perfect solutions...and in technology, whenever one solution is discovered, a workaround is sure to follow shortly thereafter.

In the case of PhotoGuard and Glaze, both need wider adoption by tech companies to really make a difference.

How?

Well, imagine if every photo you uploaded to social media were automatically protected by PhotoGuard, or every piece of digital art came with a Glaze shield; that kind of default, platform-level adoption is what it would take. There's also the challenge of screenshots: even protected images can be manipulated if someone takes a screenshot.

And then there's the challenge of trust: which tech companies get to be the arbiters of what's real and what isn't?

Tomorrow we will look at some specific ways deepfakes are being used to wreak havoc.