Facebook And Microsoft Launch Challenge To Detect Deepfake Videos

Technology for manipulating media is advancing faster than people's ability to tell what is real from what is not. And deepfakes are most often created with bad intentions.

There have been many previous attempts to detect and curb algorithmically generated fabricated videos: one uses blockchain, another relies on a method that examines individual pixels, and more. DARPA has also experimented with its own approach.

While some show promise, the deepfake problem is too big for any single entity to solve alone.

This is why Facebook has partnered with Microsoft, the Partnership on AI coalition and academics from several universities to create what it calls the 'Deepfake Detection Challenge', or DFDC. An open competition, it aims to spur innovation by focusing the world's collective brainpower on a seemingly impossible goal: curbing deepfakes once and for all.

Facebook, which has put $10 million into the project, challenges the community to create deepfake detectors and prevent people from falling prey to misinformation. To entice eligible participants, the DFDC offers prizes and a leaderboard.

According to Facebook CTO Mike Schroepfer, in a post announcing the DFDC:

"'Deepfake' techniques, which present realistic AI-generated videos of real people doing and saying fictional things, have significant implications for determining the legitimacy of information presented online. Yet the industry doesn't have a great data set or benchmark for detecting them. We want to catalyze more research and development in this area and ensure that there are better open source tools to detect deepfakes."

"The goal of the challenge is to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer. "

Training AI algorithms to reliably detect deepfakes and manipulated videos is difficult, as it requires massive amounts of training material; in this case, a huge data set of deepfakes.

This is why the social giant is using paid and consenting actors to create a library of deepfake videos.

Schroepfer continued by saying that:

"It’s important to have data that is freely available for the community to use, with clearly consenting participants, and few restrictions on usage. That's why Facebook is commissioning a realistic data set that will use paid actors, with the required consent obtained, to contribute to the challenge. No Facebook user data will be used in this data set. "

No, not all deepfakes are bad. But they are troubling for one big reason.

The deepfake algorithm was first introduced by a Reddit user, and since then the technology has become a popular way to swap one person's face onto someone else's body, most notably celebrities' faces onto porn stars' bodies.

Deepfakes have also been used to impersonate high-profile political figures, making them appear to do and say fictional things.

Deepfakes spread through the internet, where many videos are compressed to lower quality for convenience, and the explosion of AI usage has made faking videos cheaper. Malicious actors benefit from both of these facts.

And as research on the technology continues, deepfakes' quality and trickery are progressing at an unprecedented pace, outpacing people's ability to tell the real from the fake.

Because the technology has enabled a whole new level of persuasion, people should always question the legitimacy of the videos they see on the internet.

Given the lack of a robust solution to curb deepfakes, Facebook's DFDC is undoubtedly a promising step in the right direction.

"This is a constantly evolving problem, much like spam or other adversarial challenges," said Schroepfer, "and our hope is that by helping the industry and AI community come together we can make faster progress."