IRONSCALES released its latest threat report last week – Deepfakes: Assessing Organizational Readiness in the Face of This Emerging Cyber Threat. We wrote earlier this year about the emergence of deepfake meeting scams, so this threat report is topical and timely.
Key stats and ideas from the report:
- 94% of survey respondents have some level of concern about the security implications of deepfakes.
- The increasing sophistication of deepfake technologies has left many people struggling to differentiate artificially generated content from reality.
- The worst of what deepfake-enabled threats have to offer is still to come. 64% of respondents believe the volume of these attacks will increase in the next 12-18 months.
- 53% of respondents say that email is an “extreme threat” as a channel for deepfake attacks.
Our POV:
- 94% of respondents said they had some level of concern about deepfakes, and rightly so. We think that 100% of respondents should have been concerned. It is still very early days for the weaponization of deepfake technology, and the various ways in which threat actors will use it for malicious ends remain to be seen. As an industry, we don’t yet have a good enough grasp of the full picture – whether deepfake threats are limited to audio and video, whether they originate in email or appear as subsequent attack methods in a multi-stage coordinated targeted attack, and so on.
- Deepfakes – especially of the live audio and video kind – are a uniquely AI-enabled cyberthreat. Detecting and responding to them will demand AI-powered cybersecurity solutions in turn.
- As an industry, we’ve talked about impersonation as a threat for a long time, often in the context of vendor impersonation (for business email compromise) or domain impersonation (for phishing attacks in general). Deepfakes are several levels up on the impersonation front. We’ll need to be careful with language, though, to differentiate the different types of attacks and, by implication, the different approaches for detecting and stopping them. It doesn’t make a lot of sense for everything that’s fake to become a “deepfake.”
And just a reminder: IRONSCALES is a client at Osterman Research. We’ve had the privilege of working with IRONSCALES on multiple research projects in recent years. We didn’t, however, have any participation in the formulation, execution, or delivery of this research.