Deepfake-Enabled Fraud Has Already Caused More Than $200 Million in Losses in 2025
Financial losses from deepfake-enabled fraud exceeded $200 million during the first quarter of 2025, according to Resemble AI’s Q1 2025 Deepfake Incident Report, which was released on Thursday.
The report describes an "alarming" escalation in the scale and sophistication of deepfake-enabled attacks worldwide. While 41% of impersonation targets are public figures (primarily politicians, followed by celebrities), the threat isn't limited to those groups: another 34% of targets are private citizens, the findings suggest. The top four uses, according to the report, were non-consensual explicit content, scams and fraud, political manipulation, and misinformation.
Geographically, the report found that the highest number of first-quarter incidents occurred in North America (38%), particularly those targeting political figures and celebrities, followed by Asia (27%) and Europe (21%). However, the data also revealed that 63% of recorded incidents involved "significant cross-border elements."
According to the report, deepfake activity is now led by video (46%), followed by images (32%) and audio (22%). The report finds that voice cloning now requires just three to five seconds of sample audio to produce a convincing clone, and that 68% of facial-manipulation deepfakes are "nearly indistinguishable from genuine media." Combined attacks that pair these techniques into synchronized audio-video impersonations now account for 33% of cases, according to the report, which also found that evasion techniques may allow deepfakes to bypass security and detection systems.
“The report emphasizes the urgent need for a multi-faceted response to the deepfake threat,” said Resemble AI in the announcement of the report. “This includes technical solutions such as increased investment in deepfake detection technologies, standardized watermarking protocols and content authentication mechanisms. Harmonized legislation across jurisdictions is urgently needed to define harmful deepfakes, establish liability for platforms and create effective enforcement mechanisms. Public resilience must be enhanced through expanded media literacy programs, accessible victim reporting mechanisms and comprehensive support systems. Finally, international cooperation is crucial to address the transnational nature of deepfake incidents through cross-border collaboration.”