Deepfakes aren't very good, and neither are the tools to detect them

A comparison of an original and deepfake video of Facebook CEO Mark Zuckerberg. Elyse Samuels/The Washington Post

We're lucky that deepfake videos aren't a big problem yet. The best deepfake detector to emerge from a major Facebook-led effort to combat the altered videos would catch only about two-thirds of them.

In September, as speculation about the danger of deepfakes grew, Facebook challenged artificial intelligence wizards to develop techniques for detecting deepfake videos. In January, the company also banned deepfakes used to spread misinformation.

Facebook's Deepfake Detection Challenge, run in collaboration with Microsoft, Amazon Web Services, and the Partnership on AI, was hosted on Kaggle, a Google-owned platform for coding contests. It provided a vast collection of face-swap videos: 100,000 deepfake clips, created by Facebook using paid actors, on which entrants tested their detection algorithms. The project attracted more than 2,000 participants from industry and academia, and it generated more than 35,000 deepfake detection models.

The best model to emerge from the contest detected deepfakes from Facebook's collection just over 82 percent of the time. But when that algorithm was tested against a set of previously unseen deepfakes, its performance dropped to a little over 65 percent.
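That gap is simply the standard accuracy metric computed on two different test sets: clips drawn from the challenge's own collection versus clips the model never saw during development. A minimal sketch of the comparison (the prediction lists below are hypothetical, not actual challenge data):

```python
def accuracy(predictions, labels):
    """Fraction of clips classified correctly (1 = deepfake, 0 = authentic)."""
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)

# Ground truth for ten hypothetical clips: five deepfakes, five authentic.
labels = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]

# A detector tuned on the challenge data does well on clips like those...
challenge_preds = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
print(accuracy(challenge_preds, labels))  # 0.9

# ...but stumbles on previously unseen deepfakes made with other methods.
unseen_preds = [1, 1, 0, 0, 1, 0, 0, 1, 1, 0]
print(accuracy(unseen_preds, labels))  # 0.6
```

The drop between the two numbers is the generalization gap the contest exposed: models fit the quirks of the deepfakes they were trained against.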

“It's all fine and good for helping human moderators, but it's obviously not even close to the level of accuracy that you need,” says Hany Farid, a professor at UC Berkeley and an authority on digital forensics, who is familiar with the Facebook-led project. “You need to make mistakes on the order of one in a billion, something like that.”

Deepfakes use artificial intelligence to digitally graft one person's face onto another, making it seem as if that person did and said things they never did. For now, most deepfakes are bizarre and amusing; a few have appeared in clever advertisements.

The worry is that deepfakes might someday become a particularly potent weapon for political misinformation, hate speech, or harassment, spreading virally on platforms such as Facebook. The bar for making deepfakes is worryingly low, with simple point-and-click programs built on top of AI algorithms already freely available.

“I was pretty personally frustrated with how much time and energy smart researchers were putting into making better deepfakes,” says Mike Schroepfer, Facebook's chief technology officer. He says the challenge aimed to encourage “broad industry focus on tools and technologies to help us detect these things, so that if they're being used in malicious ways we have scaled approaches to combat them.”

Schroepfer considers the results of the challenge impressive, given that entrants had only a few months. Deepfakes aren't yet a big problem, but Schroepfer says it's important to be ready in case they are weaponized. “I want to be really prepared for a lot of bad stuff that never happens rather than the other way around,” Schroepfer says.

The top-scoring algorithm from the deepfake challenge was written by Selim Seferbekov, a machine-learning engineer at Mapbox based in Minsk, Belarus; he won $500,000. Seferbekov says he isn't particularly worried about deepfakes, for now.

“At the moment their malicious use is quite low, if any,” Seferbekov says. But he suspects that improved machine-learning approaches could change this. “They might have some impact in the future the same as the written fake …”