Researchers from artificial intelligence startup SenseTime have created the “largest” benchmark for deepfake detectors, allowing developers to train and test systems that attempt to identify face forgeries.

Why it matters: Deepfake technology, which uses artificial intelligence to manipulate media and create realistic-looking videos, images, or sounds, has sparked concerns that applications including facial recognition systems could be fooled, leading to compromised personal data. In popular cases, celebrities’ faces have been superimposed on bodies that are not their own.

  • China has enacted rules that require online platforms to clearly mark content that has been created using deepfakes, deep learning, virtual reality, or other technologies.

“The popularization of ‘deepfakes’ on the internet has further set off alarm bells among the general public and authorities, in view of the conceivable perilous implications. Accordingly, there is a dire need for countermeasures to be in place promptly, particularly innovations that can effectively detect videos that have been manipulated.”

—Researchers from SenseTime and Nanyang Technological University


Details: SenseTime Research created the benchmark, dubbed DeeperForensics-1.0, along with Singapore’s Nanyang Technological University. The researchers claim the dataset is 10 times larger than others of its kind, consisting of 60,000 videos made up of 17.6 million frames.

  • SenseTime said that all source videos were carefully collected, and fake videos were generated with a newly proposed face-swapping framework.
  • To build DeeperForensics-1.0, the researchers collected face data from 100 paid actors from 26 countries, 53 of whom were male and 47 female. The actors fell between the ages of 20 and 45, the age group that most commonly appears in real-world videos, SenseTime said.
  • The researchers added that the videos in the dataset are “more realistic” than others that exist. They also included footage at different rates of compression and blur in order to mimic real-world scenarios.
  • The benchmark also includes a hidden test set, which contains manipulated videos that were able to trick human evaluators, they said.

Context: Deepfakes gained widespread attention in China in September when popular face-swapping platform Zao was thrust into the spotlight over privacy issues. Released on August 31, the app quickly went viral in China, with its servers hitting maximum capacity on the day of its launch, before its policies allowing excessive data collection were publicized.

  • Regulators took notice, summoning executives from its parent company, dating platform Momo, to discuss Zao’s data collection practices.

Chris Udemans

Christopher Udemans is a Shanghai-based data and graphics reporter. He covers Chinese artificial intelligence, mobility, and cybersecurity. You can contact him at chrisudemans [at] technode [dot] com.
