China’s biggest deepfake scam to date has led to warnings of a rise in fraud cases using AI tools such as face-swapping and voice mimicking. In the widely discussed case, the legal representative of a Fuzhou tech firm was allegedly defrauded of RMB 4.3 million ($610,000) after receiving a video call from a “friend” who turned out to be a fraudster using AI face-swapping technology. The case became a hot topic on social media after police confirmed its details, highlighting how AI can be used to con well-educated adults within minutes.

Why it matters: AI regulation is still a developing subject in China. In mid-April, the country’s internet regulator issued a draft regulation on the use of generative AI and sought public feedback on the proposed measures. Initial excitement around the potential of ChatGPT and similar AI products in China has given way to concerns over how AI could be used to supercharge criminal activity.

Details: According to disclosures by police in the eastern Chinese city of Fuzhou, on April 20 a fraudster hijacked an individual’s WeChat account and used it to place a video call to a businessman, an existing contact on the individual’s WeChat app. The fraudster used AI to deepfake the individual’s face and told the businessman they needed to make a bank transfer. The businessman subsequently transferred RMB 4.3 million to the fake friend’s bank account without verifying the caller’s true identity.

  • After the fraud victim alerted the authorities, Fuzhou and Baotou police helped intercept some of the stolen funds. However, multiple media outlets have reported that around RMB 1 million has yet to be recovered and that the case is thought to be the biggest such scam to date. Police investigations are ongoing.
  • The fraudster used tools that could steal audiovisual information to generate convincing AI voice and image material, police said.
  • The fraud sparked widespread discussion on Chinese social media. On Tuesday, the trending hashtag #AI crime overwhelms the country#, which had garnered a total of 180 million views, was seemingly removed from the social media platform Weibo amid fears that the case might inspire copycat crimes.
  • China Youth Net, a media outlet backed by the Communist Youth League of China, was among those that later posted warnings to the public about the dangers of AI scams.
  • Face-swapping technology has also been used by online livestreamers to produce deepfakes of popular celebrities, according to local media outlet China Economic Network, raising related issues around fraud and intellectual property rights.

Context: The global buzz surrounding the launch of ChatGPT has prompted a spate of AI-related product launches in China, with the country’s tech majors rushing to prove they can offer similar technology. However, the Fuzhou fraud case, along with other high-profile deepfake incidents, has reminded people of the potential downsides of such advances in artificial intelligence.

  • In a much-reported incident that is testing the boundaries of China’s copyright laws, famous Mandopop singer Stephanie Sun has seen an AI version of her voice used to produce new covers of popular songs in recent weeks. “AI Stephanie Sun” has nearly a thousand videos on Chinese video-sharing platform Bilibili, with the Singaporean star’s voice being used on everything from folk songs and nursery rhymes to anime theme tunes. Some covers, like “Rainy Day” and “Hair Like Snow,” have garnered over a million hits.
  • On Monday, Sun responded on Chinese social media by asking fans to stay true to themselves and recognize the futility of “arguing with someone who releases an album every minute.” AI “poses a threat to thousands of jobs, such as the legal, medical, accounting, and other industries, as well as the one we are currently discussing, singing,” Sun added.

Jessie Wu is a tech reporter based in Shanghai. She covers consumer electronics, semiconductors, and the gaming industry for TechNode. Connect with her via e-mail: jessie.wu@technode.com.