The Gist:
Cybersecurity company Wiz was targeted by cybercriminals leveraging deepfake technology. While speaking at TechCrunch Disrupt, Wiz co-founder and CEO Assaf Rappaport revealed that dozens of Wiz employees received voice messages from him asking for their credentials.
The attempt came dangerously close to succeeding, but Wiz employees were not fooled: the voice used in the deepfake was not the day-to-day voice they knew from Rappaport, which tipped them off to the attack.
The Latest:
Wiz employees were fortunate to have such intimate knowledge of Rappaport’s speech. A sophisticated deepfake attack using audio and even visuals can be indistinguishable from the real person. This was the case when fraudsters on a deepfake video call posed as a company’s CFO and tricked a finance worker into transferring $25 million of the multinational’s funds.
Generative AI technologies, like deepfakes, enable cybercriminals to:
- Generate more sophisticated attacks: AI tools can scour the internet, collecting relevant information on the target while generating fraudulent media, such as pictures and videos, to make the attack seem like a completely legitimate communication.
- Launch attacks more frequently: AI can be leveraged to autonomously launch attacks, meaning that cybercriminals have the potential to inflict more damage with significantly less input.
- Lower the barrier to entry: As these technologies grow more sophisticated and capable, even run-of-the-mill fraudsters can use AI to enhance their capabilities, making them a genuine threat to an organization’s security.
- Bypass traditional security solutions: These new and evolving attack methods enable cybercriminals to slip by traditional, siloed security solutions.
Trustmi’s Take:
The recent deepfake attack on Wiz highlights the evolving landscape of socially engineered threats. As AI technologies become more sophisticated, organizations face unprecedented challenges in distinguishing legitimate communications from fraudulent ones.
To effectively counter these threats, it’s crucial to adopt a comprehensive security strategy that integrates Behavioral AI. This approach not only enhances detection capabilities by identifying anomalies across the network but also provides end-to-end visibility into the payment process.
By leveraging such solutions, organizations can ensure that their security measures keep pace with the rapidly evolving nature of socially engineered attacks, protecting their people, their money, and restoring trust where it matters most.
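To make the idea of behavioral anomaly detection concrete, here is a minimal, hypothetical sketch (the field names, amounts, and threshold are illustrative assumptions, not Trustmi’s actual method): it flags a payment request whose amount deviates sharply from a vendor’s historical pattern.

```python
from statistics import mean, stdev

def flag_anomalous_payment(history, amount, z_threshold=3.0):
    """Flag a payment whose amount deviates sharply from a vendor's history.

    history: past payment amounts for this vendor (hypothetical sample data).
    Returns True when the new amount sits more than z_threshold standard
    deviations from the historical mean -- i.e., route it for human review.
    """
    if len(history) < 2:
        return True  # too little history to establish a baseline; review it
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu  # history is constant; any change is unusual
    return abs(amount - mu) / sigma > z_threshold

# A vendor normally invoices around $5,000; a sudden $250,000 request
# (the kind a deepfaked executive might urgently demand) stands out.
usual = [4800, 5100, 5000, 4950, 5200]
print(flag_anomalous_payment(usual, 250_000))  # True: flagged for review
print(flag_anomalous_payment(usual, 5_050))    # False: within normal range
```

A real behavioral system would weigh many more signals (timing, counterparty, communication channel) rather than a single z-score, but the principle is the same: judge each request against established patterns instead of trusting the request itself.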
Want to know more about AI-driven impersonation attacks? Check out this blog.