Deepfakes can be fun when you superimpose your face, or a friend's, onto someone else and make it look like they're talking. However, it's not difficult to see the potential problem here: people could easily manipulate photos and videos to spread misinformation. This is why tech companies are working on deepfake detectors.

However, it turns out that these deepfake detectors may not be very smart. Researchers from UC San Diego have shown that detection tools can be easily fooled by inserting what are known as adversarial examples into deepfake photos or videos, causing the AI to trip up and make a mistake.

So how does this work? Basically, deepfake detectors work by scanning the faces in a video and sending each one to a neural network for analysis. By inserting an adversarial example, an attacker can cause the neural network to conclude that it is looking at a person's genuine face, so the detector reports that the video is authentic.
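To make the idea concrete, here is a minimal sketch of how such an attack could look in code. This is not the researchers' actual method; it assumes a hypothetical differentiable "real vs. fake" face classifier called detector and uses a basic gradient-sign perturbation purely for illustration.

```python
# Illustrative sketch only: nudge a fake face crop so a hypothetical
# detector scores it as "real", using a small gradient-sign perturbation.
import torch
import torch.nn.functional as F

def adversarial_frame(detector, face_frame, eps=2/255):
    """Perturb a face crop to flip a binary fake/real classifier.

    detector   -- any differentiable model returning logits [fake, real]
                  (an assumption for this example, not a real library API)
    face_frame -- tensor of shape (1, 3, H, W), values in [0, 1]
    eps        -- maximum per-pixel change, kept tiny so the edit is
                  effectively invisible to the human eye
    """
    face_frame = face_frame.clone().requires_grad_(True)
    logits = detector(face_frame)
    # Loss is low when the detector predicts "real" (class index 1 assumed)
    target = torch.tensor([1])
    loss = F.cross_entropy(logits, target)
    loss.backward()
    # Step against the gradient to push the prediction toward "real"
    perturbed = face_frame - eps * face_frame.grad.sign()
    return perturbed.clamp(0, 1).detach()
```

The key point the researchers make is that changes this small leave the video looking unchanged to a person, while the detector's verdict flips.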

This isn’t to say that deepfake detectors are useless, but it is clear that more work needs to be done on them. According to the researchers, “To use these deepfake detectors in practice, we argue that it is essential to evaluate them against an adaptive adversary who is aware of these defenses and is intentionally trying to foil these defenses. We show that the current state of the art methods for deepfake detection can be easily bypassed if the adversary has complete or even partial knowledge of the detector.”

You can take a look at the video above: to the human eye, it is obvious that the person in the video is not former US President Barack Obama, but the computer is convinced that it is authentic.

Filed in General. Source: Engadget
