Fueled by generative AI, social engineering attacks are evolving at an alarming rate. What was once unthinkable—realistic impersonations, fake voices, and perfectly crafted phishing emails—is not only possible, it’s happening with frightening regularity and success.
Today, bad actors are stealing millions from businesses and individuals alike, which explains why Cybersecurity Ventures predicts that global cybercrime costs will grow by 15% annually over the next five years, reaching $10.5 trillion by 2025. While this figure may be staggering, the warning signs are there:
- According to a recent CrowdStrike report, social engineering tactics aimed at stealing credentials surged 442% in the second half of 2024.
- A 2024 academic study found that the click rate for AI-generated phishing emails (54%) was significantly higher than for human-written phishing messages (12%).
The Gist:
In our Behind the Breach series, we have covered several types of social engineering attacks, including the surge in deepfake videos. And if those incidents weren’t alarming enough, more threats are emerging. One that’s gaining traction is voice cloning.
The Latest:
As with deepfakes, fraudsters are tapping generative AI to create cloned voices that are virtually indistinguishable from the people they impersonate. Dubbed digital doppelgängers by some, voice clones are far easier and faster to generate than deepfake videos, with convincing audio created in mere seconds. Here’s how it works:
- The fraudsters gather sound bites from social media sites, such as Facebook, TikTok, Instagram, YouTube, and other sources.
- Next, they use AI tools to analyze and learn speech patterns, tone, and inflections, along with machine learning algorithms that generate synthetic speech mimicking the original voice.
- From there, they create highly realistic voice clones that are nearly indistinguishable from the original.
One widely reported voice clone attack occurred in 2019, when attackers targeted a UK-based energy firm. Impersonating the chief executive of the firm’s German parent company, the attackers called a senior executive and urgently instructed him to transfer €220,000 to a Hungarian supplier. The cloned voice, which framed the request as a critical business transaction, successfully convinced the executive to transfer the funds.
Trustmi’s Take:
AI-driven threats like voice cloning are evolving too quickly for traditional defenses to keep pace. Trustmi believes the best way to combat these sophisticated scams is by using AI-powered solutions that focus on protecting what attackers are truly after—your company’s money.
By analyzing payment behaviors and adapting to emerging threats in real time, Trustmi’s Behavioral AI security solutions provide proactive fraud prevention tailored to finance teams and executives.
Ready to see how our solutions can safeguard your organization? Request a demo today!