“Mom, it’s me! I’ve been in an accident and need money right away!”
The voice on the phone sounds exactly like your child, but it's actually an artificial intelligence clone created from a three-second clip of their voice on Facebook. Welcome to the frightening new world of AI-powered fraud. Generative artificial intelligence (GenAI) has handed scammers a powerful new toolkit that makes yesterday's email scams look amateurish by comparison.
The sophisticated fraud techniques emerging today are virtually undetectable to the untrained eye or ear. And the financial impact is staggering. Since 2020, phishing and scam activity has increased by 94%, with millions of new scam pages appearing monthly. Even more alarming, experts estimate losses from AI-powered scams will reach $40 billion in the U.S. by 2027.
What is generative AI and why should you care?
Generative AI refers to artificial intelligence systems that create new content — text, images, audio or video — based on data they've been trained on. Unlike traditional AI, which analyzes existing information, generative AI produces entirely new, convincing content. The most concerning part? These powerful tools are increasingly accessible to fraudsters who use them to create sophisticated scams that are harder than ever to detect.
How fraudsters are weaponizing GenAI
Today’s scammers use generative AI to “supercharge” their existing techniques while enabling entirely new types of fraud, according to Dave Schroeder, UW–Madison national security research strategist. Here are the four most dangerous ways they’re using this technology.
Voice cloning: The 3-second threat
With just three seconds of audio, easily obtained from social media, voicemails or videos, fraudsters can create a convincing replica of your voice using AI. “Imagine a situation where a ‘family member’ calls from what appears to be their phone number and says they have been kidnapped,” explains Schroeder. “Victims of these scams have said they were sure it was their family member’s voice.”
These AI-generated voice clones can be used to manipulate loved ones, coworkers or even financial institutions into transferring money or sharing sensitive information, making it increasingly difficult to distinguish between genuine and fraudulent calls.
Fake identification documents
Today's AI tools can generate convincing fake identification documents, complete with AI-generated photos. Criminals use these to verify identity when fraudulently opening new accounts or taking over existing ones. These fake IDs are becoming increasingly sophisticated, often including realistic holograms and barcodes that can bypass traditional security checks and even fool automated verification systems.
Deepfake selfies
Many financial institutions use selfies for customer verification. However, fraudsters can take images from social media to create deepfakes that bypass these security measures. These AI-generated deepfakes are not limited to still images; fraudsters can also produce realistic videos that fool liveness detection checks during facial recognition, posing a significant threat to biometric authentication systems.
Hyper-personalized phishing
GenAI also crafts flawlessly written, highly personalized phishing emails, drawing on your online presence to tailor messages to your interests and personal details. These AI-enhanced phishing attempts can incorporate sophisticated chatbots as well, making them significantly more convincing and harder to detect than traditional phishing scams.
Why you might be a prime target
While everyone is at risk from these sophisticated AI scams, certain factors can make you a more attractive target to fraudsters. Those with substantial retirement savings or investments naturally represent more valuable targets: the more assets you have, the more attention you'll attract from criminals looking for bigger payoffs.
Many older adults are particularly vulnerable because they didn't grow up with today's technology and may be less familiar with AI's capabilities. This knowledge gap makes it harder to recognize when AI is being used maliciously.
Compounding this risk is an extensive digital footprint: if you're active on social media or have a significant online presence, you're inadvertently providing fraudsters with the raw materials they need to create convincing deepfakes and highly personalized scams designed specifically to exploit your trust.