AI deepfakes are making fraud even more sophisticated. Learn how to stay safe from deepfake scams.
Scammers are using deepfake technology to replicate your child's voice in a kidnapping hoax, catfish victims with AI-generated video dates, and impersonate executives to steal millions. Learn how to spot deepfake fraud, and use Avast Scam Guardian to help verify what's real before it's too late.
Deepfake technology started as a fun party trick: put your face in a Marvel movie, swap voices for laughs, or have the Pope model a Balenciaga jacket.
But today, it’s helping cybercriminals make their schemes even more convincing. In fact, deepfake fraud in the U.S. skyrocketed by 700% in the first few months of 2025, according to Sumsub.
And the risk goes further than you might expect. People can use it to impersonate your loved ones or even create a whole new “person” from scratch for the sole purpose of scamming you. Let’s break down how scammers are using deepfake AI to fool your eyes, ears, and wallet.
Fake faces and real problems
Deepfakes are synthetic media made using AI, and they’re turning fraud into a full-blown art form. Criminals are using AI deepfake tech to build synthetic identities using AI-generated images, voices, and partial real data (like a stolen SSN or address). They then use these fake personas to apply for jobs, open bank accounts, or trick unsuspecting users into sending money.
Case in point: North Korean IT workers were caught using deepfake videos to apply for remote U.S. tech jobs in 2023. This way, they could funnel cash and intellectual property to the regime.
Worse, the threat is growing. According to Sumsub, synthetic identity document fraud surged 300% in the U.S. in the early months of 2025.
Imposter in the C-suite
Imagine your company’s CFO sends you a message requesting an urgent supplier payment. You check the video message. It’s clearly them. The voice, the face — it all checks out.
Except… it’s not real.
Something similar happened to Ferrari in 2024: an executive received a WhatsApp voice message impersonating the Ferrari CEO. Thankfully, the executive figured out it was a scam before it was too late.
Hollywood-style deception
Celebs aren’t safe from deepfake fraud, either. Scammers are turning them into unwilling spokespeople for scams using videos and AI-generated voices.
These videos rack up millions of views before they’re flagged. Meanwhile, victims trust familiar faces, and scammers bank on that.
A new (fake) face at the family reunion
In a cruel twist on the classic "grandchild scam," fraudsters are using deepfake technology to mimic family members in distress. This happened to one Florida woman who received a call that sounded exactly like her daughter, claiming she had been involved in a car accident. A supposed lawyer then demanded $15,000 in cash for bail.
To pull off this scheme, scammers use short voice clips (easily scraped from Instagram, TikTok, or YouTube) to clone voices with frightening accuracy.
Artificial love at first sight
Deepfakes take catfishing and other romance scams to the next level, with criminals creating fake profiles using AI-generated images and voices. Some even clone the voices of celebrities or influencers. Victims believe they’ve met someone special… until the money requests start.
For example, a Los Angeles woman sent over $80,000 to a scammer pretending to be "General Hospital" star Steve Burton to help pay for a home she believed they would share together.
What happens if I fall for one of these deepfake scams?
You’re not alone. And you’re not foolish. This tech is designed to trick even the most tech-savvy people. That’s why AI scam protection is more important than ever.
Here’s what’s at stake:
- Financial loss: These scams can cost anywhere from a few hundred dollars to your life savings.
- Identity compromise: Your personal info might be out there, fueling future fraud and a continued risk of identity theft.
- Emotional toll: Embarrassment, anxiety, and distrust are common feelings after a scam. Romance or family scams tend to hit particularly hard.
What should I do if I’m talking to a deepfake?
The second you suspect something’s off, stop engaging and take immediate action. Here’s what to do next:
- Block the scammer: Cut off all communication immediately.
- Contact affected companies: Notify banks, employers, and any other service that might’ve been compromised.
- File a police report: Police reports help track scammers and might help recover losses.
- Report to federal agencies: File a complaint with the FTC and the FBI’s IC3.
How to stay safe from deepfaked faces and voices
You may not be able to avoid these attacks, but you can help protect against them. Here are a few ways to outsmart the bots:
- Look for the signs: Robotic or overly smooth voices, weird eye movement, and glitchy video quality should raise suspicion.
- Ask “Is this likely?”: Would Scarlett Johansson really FaceTime you about a crypto app?
- Be wary of weird requests: Asking for money, passwords, or any urgent action is a red flag.
- Use multi-factor authentication: MFA is especially important for email and banking.
- Have a family “safe word”: Choose a secret phrase only real loved ones would know.
Don’t get (deep)faked out
The scammers may have AI, but so do we. Avast Scam Guardian in Avast Free Antivirus uses AI-powered threat detection to flag and block malicious links, suspicious audio/video sources, and social engineering attempts before they get to you.
FAQs
How common are deepfakes?
Deepfakes are becoming very common, with 60% of consumers having encountered one in the past year.
Can deepfakes be prosecuted?
Whether deepfakes can be prosecuted often depends on the scenario. Offenders can be charged with fraud, identity theft, or defamation, though global laws are still evolving.
Does Google block deepfakes?
Google partially blocks deepfakes, and it recently announced steps to remove explicit deepfakes from search results. YouTube and Google’s ad platforms ban deepfakes meant to mislead, but detection tech is still catching up.
How many deepfake losses have occurred in 2025?
According to the World Economic Forum, deepfake fraud globally caused more than $200 million in financial losses in just the first quarter of 2025.