What they are and how to recognize them
![](https://www.comparitech.com/wp-content/uploads/2025/01/AI-scams-are-here-–-heres-what-to-look-out-for.jpg)
Unless you’ve been living under a rock, inside a cave, or on another planet, you’re likely aware that there’s a new kid on the tech block: artificial intelligence. AI can do many things, like answer questions and produce pictures, videos, text, and code. AI can potentially supercharge existing scams while opening the door to brand-new ones.
Whether AI scams are entirely new or enhance existing ones, the goal will typically be to get a piece of your private info for financial gain, identity theft, and other crimes. Most AI scams (though not all) have a phishing component to them if they’re not full-on phishing scams.
In this post, we look at how AI is reshaping the digital world’s attack surface by reviewing some of the more prevalent AI scams and providing tips on how to avoid them. But before we jump into that, let’s give a quick overview of phishing scams viewed through an AI lens.
Going phishing (with AI)
Phishing isn’t anything new. People have been masquerading as anyone and their uncle to trick the unsuspecting into handing over valuable information since the dawn of time. The internet then offered malicious actors a massive attack surface on which to perpetrate phishing scams online.
You’ve more than likely received a phishing email at one time or another, claiming that personal or financial information is required for your package delivery, for your refund to go through, or for your account to return to working order.
Now, with AI, online phishing has gotten a massive shot in the arm. Old tell-tale signs like poor spelling and “off” branding are no longer reliable. Generative AI is making these emails a lot cleaner and slicker. The fake website the malicious email’s link directs you to will also feel more legitimate and be more challenging to identify as fraudulent – and you can thank AI for that.
Be suspicious of online strangers who claim they want to help you. That’s as unlikely as Brad Pitt needing your money to pay for surgery…
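One simple, mechanical check you can apply to any suspicious email is whether the brand shown in the sender’s display name matches the domain of the actual address. The sketch below uses Python’s standard `email.utils.parseaddr`; the function name and the example addresses are hypothetical, invented for illustration.

```python
from email.utils import parseaddr

def suspicious_sender(from_header: str, expected_domain: str) -> bool:
    """Flag a From: header whose address isn't on the domain its display name implies."""
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower()
    # Accept the exact domain or any subdomain of it; anything else is suspect.
    return not (domain == expected_domain or domain.endswith("." + expected_domain))

# A brand name in the display field, but the address is on an unrelated domain.
print(suspicious_sender('"PayPal Support" <help@secure-refunds.example>', "paypal.com"))  # True
print(suspicious_sender('"PayPal" <service@paypal.com>', "paypal.com"))                   # False
```

Bear in mind that From: headers can be spoofed outright, so a matching domain isn’t proof of legitimacy – this check only catches the lazier fakes.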
Deepfake scams
Deepfakes are very convincing AI-generated videos that depict real people saying and doing things they never said or did. In other words, they’re fake videos designed to mislead you. The term “deepfake” blends “deep learning” – the AI technique behind them – with “fake.”
Deepfake AI uses existing video and audio to create a convincing facsimile of a real person. While the tech has some legitimate uses in film and television (e.g., special effects), within the context of AI scams, it’s used to fool you into believing falsehoods – and once you believe them, you may be willing to hand financial or other personal information straight to your attacker.
Deepfake scam example
In late 2023, a video of Taylor Swift saying she had teamed up with kitchenware brand Le Creuset for a giveaway made the rounds on Facebook. All you had to do to enter the giveaway was to provide your personal information and make a ten-dollar payment.
The video was a deepfake. Taylor Swift never partnered with Le Creuset, and there was no contest. Oh, and that ten-dollar payment? It turned out to be a recurring monthly charge rather than a one-time fee – something victims only discovered later.
How to identify deepfake videos
While AI video generation is very good (more than good enough to be convincing), it’s still not perfect. There’s a good chance there will be imperfections and artifacts in the video that can be signs of a deepfake.
- Look for any “strangeness” in the person’s body. This commonly happens with the person’s hands in deepfakes. Look for missing or undefined fingers in certain frames. These kinds of inconsistencies are prevalent in AI-generated videos.
- Keep an eye out for unnatural facial movements. If the deepfake was produced by modifying a legitimate video, it could make things like facial movements somewhat “off.”
- Remember that deepfake videos tend to be shot from a single angle, and the subject usually faces one direction without ever turning.
- Pay close attention to lip movement and synchronization. If either is off, that should raise red flags.
- Check for unnatural lighting or strange artifacts appearing in the background.
- Of course, try and verify the source of the video. Treat it as suspect if you can’t find a trustworthy source.
Voice cloning scams
Voice cloning is the audio equivalent of a deepfake. Training an AI model on recordings of a person’s voice lets scammers produce a convincing rendition of that person’s voice, tone, and prosody. It’s uncanny.
At first glance, voice cloning can appear less dangerous than a deepfake because it lacks the visual support. But the opposite is true: precisely because it’s voice-only, you’re more likely to fall victim to a voice cloning scam than a deepfake scam. Our ears are not as good at identifying fake audio as our eyes are at spotting fake video.
AI’s accuracy in replicating someone’s voice is chilling. Victims of voice cloning scams typically state that they never for a second doubted they were speaking to the actual person. That’s how good it is.
Examples of voice cloning
The end game in a voice cloning scam is financial gain. A common tactic is to impersonate one of the victim’s loved ones who is in a bind and needs quick cash. The caller will typically claim there’s a time constraint and that the money is needed right away. That’s a classic tactic to make you panic and react emotionally, bypassing your rational thought processes.
While some voice cloning scams feature celebrity voices, the most common variant impersonates someone you know, typically a family member in need of help. The reason is pretty simple: you’re much more likely to react emotionally to a loved one in need, and you’re more likely to trust them. Also, why would a celebrity need your money? Orchestrating the scam this way makes it more likely to succeed.
Of course, if you send the requested funds, you’ll send them straight to your attacker, who may now have your financial information in their possession.
How to identify voice cloning
There isn’t a list of audio artifacts to look out for to spot a cloned voice, but you can take precautions to minimize your chances of falling victim to a voice cloning scam.
- If you get a call from a loved one asking for money, hang up the phone and call them back. You can be confident you’re speaking with the actual person if you’re the one making the call.
- If you can’t get a hold of them, call someone close to them whom you trust to try and assess whether the emergency is real. Also note that if the person says something like “I need help, but don’t tell X or Y,” that should raise some red flags.
- If the person asking for money specifically requests that you send them the money using gift cards or that you use a specific online payment platform like Zelle or Apple Pay (which are quasi-instantaneous and irreversible), that should raise red flags.
- Come up with a “safety word” that you share with people you’re closest to and can use to verify each other’s identity in such a situation. Make sure to keep the safety word secret. Don’t even write it down on a piece of paper.
Fake images / fake news
Generative AI has an uncanny ability to create photorealistic fake images. That’s mostly fine when they’re used for artistic purposes (potential copyright issues aside). However, the same capability is used to depict fake events and prop up fake news stories.
Instilling false beliefs in people’s minds is bad enough. But this tactic is also used to defraud the unsuspecting. One of the ways this happens is by generating gut-wrenching images of children in need to promote fake charities that ask you for real money.
Examples of fake images / fake news
The above scenario played out in the aftermath of the 2023 Turkey-Syria earthquake. Immediately after the quakes, a bunch of charities you’d likely never heard of cropped up. Even DIY “charities” appeared, in which an individual claiming to be in contact with victims collects money for the cause. Flooding social networks with these posts exploits the victim’s sense of community (just a well-meaning user trying to make a difference), and the heartbreaking images accompanying the posts may well seal the deal.
However the request reaches you, the idea is to trigger an emotional response and bypass your rational thought processes, heightening the chances that you’ll donate money without making the proper verifications.
How to identify fake images / fake news
- Check the source of the content before getting on board. There’s no shortage of folks looking to manipulate you online.
- Verify the story through an established and reliable news outlet. If it appears to be a major story and you can’t find a trusted news outlet to corroborate it, there’s a good chance it’s fake.
- Official charities have registration numbers you can look up to confirm their legitimacy. Check those before donating. And while it’s possible that a random individual honestly wants to raise money for a given cause, the odds are very low that you’ll be able to verify where your money went. Also, official charitable donations are usually tax-deductible, but that random Facebook charity likely doesn’t qualify.
- Find a good fact-checking website and use it. Always verify claims or news stories you’re unsure about before espousing a belief or making a financial contribution.
General tips to avoid AI scams
Here are some tips to avoid AI scams:
- It’s a sad state of affairs, but we can no longer trust photographs, video, or audio in today’s digital age. Of course, not every image is fake, but given our current landscape, we have no other choice but to be skeptical of what we see and hear online. A picture may be worth 1000 words, but today, those words could all be lies…
- Limit the amount of personal information that is publicly visible on your social media accounts – public posts make a scammer’s job easier. Also, go through the privacy settings on your social media accounts and tighten them. And if you think sharing personal information on the internet is harmless, change your mindset. It’s not.
- Use strong, unique passwords for all your online accounts. Never reuse the same password on multiple accounts. If you do, when one account becomes compromised, all the others using the same password will be compromised, too.
- Be mindful of sharing your data with third-party apps. Only share your data with people, apps, and services you trust.
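On the password tip above: rather than inventing passwords yourself, generate them. A minimal sketch using Python’s standard `secrets` module (the function name and default length are my own choices, not from the article):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password from letters, digits, and punctuation,
    using a cryptographically secure source of randomness."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique password per account, never reused.
print(generate_password(20))
```

In practice, a password manager does this for you and remembers the result, which is how unique-per-account passwords stay workable.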
General tips to keep you and your accounts safe online
The tips below always apply; follow them whether you’re guarding against one particular threat or all of them.
- Be conservative with your PII online. Don’t sign up for everything. Don’t hand out your details to every site you encounter. Only share your information with sites and services you trust.
- Use a burner email for frivolous services. You can easily find email alias services that allow you to use burner addresses to sign up for online services. That makes your email much less likely to be compromised (and will also limit spam).
- Don’t open attachments in emails unless you know who the sender is and you’ve confirmed with that person that they really did send you that email. You should also ensure they know the email contains an attachment and understand what the attachment is.
- Don’t click links (URLs) in emails unless you can confirm who sent the link and where it leads. Contacting the sender through another channel (not email) is also a good way to ensure the sender isn’t being impersonated. Also, check the link for misspellings (faceboook instead of facebook, or goggle instead of google). If you can reach the destination without using the link, do that instead.
- Use a firewall. All major operating systems have built-in incoming firewalls, and all commercial routers on the market provide a built-in NAT firewall. Enable both. You’ll be glad you did if you ever click a malicious link.
- Use an antivirus program – Only purchase genuine and well-reviewed antivirus software from legitimate vendors. Keep your antivirus updated and set it up to run frequent scans and real-time monitoring.
- Keep your operating system updated – You want the latest OS updates. They contain the latest security patches that will fix any known vulnerabilities. Make sure you install them as soon as they’re available.
- Never click on pop-ups. Ever. Pop-ups are just bad news—you never know where they will lead you.
- Don’t give in to “warning fatigue” if your browser displays yet another warning about a website. Web browsers are becoming more secure daily, which tends to raise the number of security prompts they display. Still, you should take those warnings seriously. So, if your browser displays a security prompt about a URL you’re attempting to visit, pay attention to your browser’s warning and get your information elsewhere. That’s especially true if you click a link you received by email or SMS – it could send you to a malicious site. Do not disregard your computer’s warning prompts.
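The “misspelled link” check above can even be automated: a domain that is one or two characters away from a brand you use is a classic typosquat. The sketch below is a minimal illustration, assuming a hypothetical allow-list of brands; real tooling would use a public-suffix list rather than the naive two-label domain guess here.

```python
from urllib.parse import urlparse

# Hypothetical allow-list of brands you actually use (an assumption for this sketch).
KNOWN_DOMAINS = ["facebook.com", "google.com", "apple.com"]

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def looks_like_typosquat(url: str) -> bool:
    """Flag domains that are close to, but not exactly, a known brand."""
    host = urlparse(url).hostname or ""
    domain = ".".join(host.split(".")[-2:])  # naive registrable-domain guess
    for known in KNOWN_DOMAINS:
        distance = edit_distance(domain, known)
        if 0 < distance <= 2:  # a near miss: one or two characters off
            return True
    return False

print(looks_like_typosquat("https://faceboook.com/login"))  # True
print(looks_like_typosquat("https://facebook.com/login"))   # False
```

A near miss is far more suspicious than a completely unrelated domain, because the whole point of a typosquat is to look almost right.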
AI scams – Wrapping Up
So, that was a look at the most common AI scams. While the advent of AI has enabled some genuinely new scams, its biggest contribution has been to lower the barrier to entry for scammers – even if some of its means of manipulating people are novel.
AI makes online scams much easier to perpetrate and more convincing. And because generative AI has been trained on so many domains (including coding and software development), it reduces the technical skill required to pull off a scam.
Hopefully, the examples and tips in this post will help you avoid them.
As always, stay safe.