Some thoughts on the new Ironscales report on deepfakes
https://ostermanresearch.com/2024/10/11/review-ironscales-deepfakes/
Thu, 10 Oct 2024 19:31:49 +0000

IRONSCALES released its latest threat report last week – Deepfakes: Assessing Organizational Readiness in the Face of This Emerging Cyber Threat. We wrote earlier this year about the emergence of deepfake meeting scams, so this threat report is topical and timely.

Key stats and ideas from the report:

  • 94% of survey respondents have some level of concern about the security implications of deepfakes.
  • The increasing sophistication of deepfake technologies has left many people struggling to differentiate artificially generated content from reality.
  • The worst of what deepfake-enabled threats have to offer is yet to come: 64% of respondents believe the volume of these attacks will increase in the next 12-18 months.
  • 53% of respondents say that email is an “extreme threat” as a channel for deepfake attacks.

Our POV:

  • 94% said they had concerns about deepfakes, and so they should. We think 100% of respondents should be concerned. It is still very early days for the weaponization of deepfake technology, and the various ways threat actors will use it for malicious ends remain to be seen. As an industry, we don’t yet have a good enough grasp of the full picture, such as whether deepfake threats are just audio and video, whether they originate in email or arrive as subsequent attack methods in a multi-stage coordinated targeted attack, and so on.
  • Deepfakes – especially of the live audio and video kind – are a uniquely AI-enabled cyberthreat. Detecting and responding to them will demand AI-powered cybersecurity solutions; a minimal sketch of one detection pattern follows this list.
  • As an industry, we’ve talked about impersonation as a threat for a long time, often in the context of vendor impersonation (for business email compromise) or domain impersonation (for phishing attacks in general). Deepfakes take impersonation several levels further. We’ll need to be careful with language, though, to differentiate different types of attacks and, by implication, different types of approaches for detecting and stopping them. It doesn’t make a lot of sense for everything that’s fake to become a “deepfake.”
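
To make the detection point concrete, here is a minimal sketch of what an AI-powered control might do: aggregate per-frame scores from a deepfake classifier across a recorded meeting and flag the recording for human review. The score_frame function and both thresholds are hypothetical placeholders for illustration, not any vendor’s API.

    # A minimal sketch, assuming a per-frame deepfake classifier exists.
    # score_frame is a hypothetical stand-in, not a real library call.
    from statistics import mean
    from typing import Iterable

    def score_frame(frame: bytes) -> float:
        """Hypothetical detector: return a 0.0-1.0 likelihood that this
        video frame is synthetically generated. A real system would call
        an actual detection model here."""
        raise NotImplementedError("plug in a real model")

    def flag_for_review(frames: Iterable[bytes],
                        mean_threshold: float = 0.7,
                        spike_threshold: float = 0.9) -> bool:
        """Flag a recording when the average score is high, or when any
        single frame spikes above the spike threshold."""
        scores = [score_frame(f) for f in frames]
        if not scores:
            return False
        return mean(scores) >= mean_threshold or max(scores) >= spike_threshold

The spike threshold matters as much as the mean: a live deepfake that glitches for only a fraction of a second can still look clean on average.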

And just a reminder: IRONSCALES is a client of Osterman Research. We’ve had the privilege of working with IRONSCALES on multiple research projects in recent years. We didn’t, however, have any participation in the formulation, execution, or delivery of this research.

And so it begins … the deepfake meeting scams
https://ostermanresearch.com/2024/02/10/deepfake-meeting-scams/
Fri, 09 Feb 2024 18:37:48 +0000

The New Zealand Herald covered the story of a deepfake meeting scam attempt against Zuru in November 2023, which [1] featured a deepfake of the CEO attempting to get the CFO to transfer money, but [2] fell short because, while the deepfake video presented a perfect rendition of the CEO, the “AI wasn’t sophisticated enough for a real-time voice exchange.” The deepfake CEO reverted to a text exchange (by the sounds of it, either a chat session during the Teams meeting or a WhatsApp message exchange), but since the language used during that exchange deviated from the language patterns of the actual CEO, the CFO saw through the fraud attempt.

We’ve come a long way in three months, apparently: a successful and costly incident a couple of weeks back seamlessly merged video and voice of multiple deepfakes in an online meeting to trick a finance employee into transferring a large sum of money. This happened at the Hong Kong office of an unnamed multinational company, resulted in losses of US$25.6 million, and saw the scammers “convincingly replicat[ing] the appearances and voices of targeted individuals using publicly available video and audio footage.”

A few thoughts on the above:

  1. There is speculation in the comments section of the ArsTechnica article that the finance employee in Hong Kong was complicit. Yes, that’s possible, but voicing such speculation is fraught with danger: whether it proves true or false, such accusations have smeared many individuals and, in some cases, driven people to take their own lives out of a sense of public shame. If the Hong Kong employee was duped, he or she should be supported, not shamed. The incident points to a significant weakness in organizational processes and systems that the multinational company will need to address, along with everyone else.
  2. Requests for secret transfers of money to new bank accounts should be an immediate red flag, irrespective of who is asking. For any organization that doesn’t have a policy on this type of request, along with a strong authorization process that applies in such cases, fraud and other types of questionable behavior will only continue to succeed (a minimal sketch of such a check follows this list).
  3. From a tech perspective, this highlights the need for using authorized apps only, enforcing strong identity security controls, and recording and archiving online meeting content for subsequent review.
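
As a sketch of the authorization check in point 2, the policy below flags any transfer request that introduces a new beneficiary account, asks for secrecy, or exceeds a size threshold, and requires confirmation over an independent channel before release. Every field name and threshold is an illustrative assumption, not a reference to any specific product or standard.

    # Illustrative policy check for outbound transfer requests; all
    # field names and thresholds here are hypothetical examples.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class TransferRequest:
        requester: str            # who appears to be asking
        beneficiary_account: str  # destination account
        amount: float
        secrecy_requested: bool   # "keep this confidential" is a red flag

    def requires_out_of_band_verification(req: TransferRequest,
                                          known_accounts: frozenset,
                                          large_amount: float = 10_000.0) -> bool:
        """Return True when the request must be confirmed through an
        independent channel (e.g., a call to a number already on file)
        before any money moves."""
        new_account = req.beneficiary_account not in known_accounts
        return new_account or req.secrecy_requested or req.amount >= large_amount

    # Example: a request resembling the Hong Kong incident trips all three checks.
    req = TransferRequest(requester="ceo@example.com",
                          beneficiary_account="HK-1234",
                          amount=25_600_000.0,
                          secrecy_requested=True)
    assert requires_out_of_band_verification(req, frozenset({"NZ-0001"}))

The design choice matters more than the code: the check keys on properties of the request (new account, secrecy, size) rather than on who appears to be asking, because a convincing deepfake defeats identity-based trust entirely.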