ai – Osterman Research
https://ostermanresearch.com
Insightful research that impacts organizations

Using AI to Enhance Defensive Cybersecurity – our latest report
https://ostermanresearch.com/2024/11/22/using-ai-to-enhance-defensive-cybersecurity-our-latest-report/
Thu, 21 Nov 2024

For every topic, key enemies are hype and bluster. Hype is overinflated expectation of, or advocacy for, something that can’t live up to what is said about it. Bluster is the aggressive and noisy positioning of something without the depth of character or capability to follow through. As researchers, breaking through hype and disabusing bluster are core to our work.

If you’ve read any of our reports – and there’s quite a collection of them across a wide range of topics – you’ll notice that [1] they aren’t short, and [2] we try to dig into the details. Our latest report is no exception … a hype-busting and bluster-disabusing examination of the role of AI in enhancing defensive cybersecurity. You can get a copy from our portfolio.

To gather the data, we surveyed organizations in the United States on the front lines of cybersecurity attacks. To take the survey, the respondent had to work at an organization with at least 500 employees and/or at least 50 people on their security team. We wanted to get a sense of what they were seeing in terms of changing dynamics with cybersecurity attacks, particularly the impact of offensive AI. And equally, we wanted to get a read on how they were responding to these changing attack dynamics.

We reached four key conclusions in the research:

  • Attackers have the early advantage in generative AI and GANs
    Generative AI and GANs are tipping the scales in favor of attackers, but defensive AI tools are catching up, especially in behavioral AI and supervised machine learning (see the sketch after this list).
  • Integrate AI strategically into cybersecurity frameworks
    Strategic integration of AI into cybersecurity frameworks is essential to fully leverage the technology’s potential. Organizations should focus on aligning AI investments with core business objectives and risk management practices.
  • AI is a force multiplier for cybersecurity teams
    AI enables cybersecurity teams to focus on high-impact activities. However, this requires appropriate training, organizational alignment, and investment in the right tools.
  • The time for embracing AI in defensive cybersecurity is now
    As AI reshapes both offensive and defensive cybersecurity, organizations must act swiftly to secure their infrastructures, adopt AI-powered defenses, and prepare their teams for the next generation of AI-enabled threats.
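To make the defensive side of that first conclusion concrete, here is a minimal, hypothetical sketch of the kind of supervised machine learning the report points to – a classifier trained on labeled email features. The features, data, and thresholds are our own invention for illustration, not something from the report or any vendor’s product.

```python
# Hypothetical sketch: a supervised classifier for phishing detection.
# Features and data are invented for illustration; a real deployment
# would train on thousands of labeled messages with far richer features.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Each row: [num_links, has_urgent_language, sender_domain_age_days, spf_pass]
X = [
    [12, 1,   3, 0],  # phishing-like: many links, urgency, brand-new domain
    [ 1, 0, 900, 1],  # benign-like: established sender, SPF pass
    [ 8, 1,  10, 0],
    [ 0, 0, 400, 1],
    [15, 1,   1, 0],
    [ 2, 0, 700, 1],
]
y = [1, 0, 1, 0, 1, 0]  # 1 = phishing, 0 = benign

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=42
)
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)

# Score an incoming message by its extracted features.
print(clf.predict_proba([[10, 1, 5, 0]]))  # [P(benign), P(phishing)]
```

The point of the toy is the shape of the approach, not the model: defenders accumulate labeled verdicts from past incidents, and supervised models turn those labels into detection capability.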

Do these conclusions echo what you’re seeing at your organization? If so, get your copy of the report.

This research was sponsored by Abnormal Security, IRONSCALES, and OpenText.

If your firm provides AI-powered cybersecurity solutions that protect against AI-enabled attacks AND you would like to share this research with your customers and prospects, please get in contact to talk about licensing options.

Some thoughts on the new IRONSCALES report on deepfakes
https://ostermanresearch.com/2024/10/11/review-ironscales-deepfakes/
Thu, 10 Oct 2024

IRONSCALES released its latest threat report last week – Deepfakes: Assessing Organizational Readiness in the Face of This Emerging Cyber Threat. We wrote earlier this year about the emergence of deepfake meeting scams, so this threat report is topical and timely.

Key stats and ideas from the report:

  • 94% of survey respondents have some level of concern about the security implications of deepfakes.
  • The increasing sophistication of deepfake technologies has left many people struggling to differentiate artificially generated content from reality.
  • The worst of what deepfake-enabled threats have to offer is yet to come; 64% of respondents believe the volume of these attacks will increase in the next 12-18 months.
  • 53% of respondents say that email is an “extreme threat” as a channel for deepfake attacks.

Our POV:

  • 94% said they had concerns about deepfakes, and so they should. We think that 100% of respondents should have been concerned. It is still very early days for the weaponization of deepfake technology, and the various ways in which threat actors will use it for malicious ends remain to be seen. As an industry, we don’t yet have a good enough grasp of the full picture, such as whether deepfake threats are limited to audio and video, whether they originate in email or arrive as subsequent attack methods in a multi-stage coordinated targeted attack, and so on.
  • Deepfakes – especially of the live audio and video kind – are a uniquely AI-enabled cyberthreat. Detecting and responding to them will demand AI-powered cybersecurity solutions.
  • As an industry, we’ve talked about impersonation as a threat for a long time, often in the context of vendor impersonation (for business email compromise) or domain impersonation (for phishing attacks in general). Deepfakes take impersonation several levels further. We’ll need to be careful with language, though, to differentiate types of attacks and, by implication, the approaches for detecting and stopping them. It doesn’t make sense for everything that’s fake to become a “deepfake.”

And just a reminder: IRONSCALES is a client of Osterman Research. We’ve had the privilege of working with IRONSCALES on multiple research projects in recent years. We didn’t, however, have any involvement in the formulation, execution, or delivery of this research.

Making the SOC More Efficient
https://ostermanresearch.com/2024/10/09/making-the-soc-more-efficient/
Tue, 08 Oct 2024

Setting the research agenda at Osterman Research is a never-ending process of looking at possibilities, gathering early intel on the importance of each topic, and filtering a larger list down to the critical topics that can move the needle for cybersecurity at organizations. Many projects that end up on our agenda come about naturally from our ongoing wider research programs. Some, however, are suggested to us.

Our latest research agenda program fits in the latter category. When we were looking at possibilities for 2024, a client suggested:

“Something around how the security industry is evolving to make the SOC more efficient and reduce stress and burnout would be good. For example, the H/M/L prioritization of alerts didn’t really do much. What are vendors doing that works, and what doesn’t work? (There could be a little AI in here, but it would be good to go beyond that.)”

That nudge (thanks, Bob!) became the origin point for our latest report, Making the SOC More Efficient (available on the main Osterman Research site). It’s a long paper (26 pages) that attempts to deal thoughtfully and in depth with the topic, exploring the data points we captured through the survey and advocating a way forward. There is more than “a little AI” in the report, though: AI has become both the greatest threat (82.4% of security leaders said that “the use of AI by cyberthreat actors in cyberattacks” was “very impactful” or “extremely impactful” – the highest-rated trend in this research) and one of the greatest tools for defenders (via the rise of AI-enabled cybersecurity solutions).

Some of the key takeaways from the research:

  • Current SOC approaches have hit the wall
    Confidence in the ability of the SOC to protect against the threats detected by their security tools has dramatically increased during the past two years, but this increase in confidence is expected to rapidly crater. The innovations that drove increased SOC performance over the past two years do not contain the necessary ingredients to continue driving performance over the next two.
  • Specialized threat intelligence to eliminate false positives, AI for behavioral analysis, and autonomous remediation seen as top innovations
    The three innovations seen as most likely to drive SOC efficiency and reduce stress and burnout among SOC analysts are the use of specialized threat intelligence to eliminate false positives; using AI for behavioral analysis in investigating alerts and autonomously creating or updating detection rules; and autonomously remediating incidents without SOC analyst intervention. Almost half of respondents gave two AI-powered defensive innovations the highest rating.
  • New innovations improve SOC metrics by a composite average of 35%
    All organizations in this research are already experimenting with at least one new approach to improving the efficiency of their SOC. The most impactful innovations on key SOC metrics (time to begin working on an issue, time to close an incident, and number of false positives) are AI behavior analysis with autonomous rule creation/updating, AI behavioral modeling for detecting baseline deviations, and autonomous remediation of incidents. (A minimal sketch of the baseline-deviation idea follows this list.)
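To make the baseline-deviation idea concrete, here is a minimal sketch assuming a simple rolling z-score over hourly authentication counts. This is our own illustration – commercial behavioral AI models many more signals with far more sophisticated statistics – and the numbers are invented.

```python
# Minimal sketch of baseline-deviation detection: flag hours where a
# user's authentication count deviates sharply from a rolling baseline.
# Counts are invented; production behavioral models are far richer.
import statistics

hourly_logins = [4, 5, 3, 6, 4, 5, 4, 6, 5, 4, 48, 5]  # 48 is the anomaly

WINDOW = 8       # hours of history forming the baseline
THRESHOLD = 3.0  # z-score above which we raise an alert

for i in range(WINDOW, len(hourly_logins)):
    baseline = hourly_logins[i - WINDOW:i]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline) or 1.0  # guard against zero variance
    z = (hourly_logins[i] - mean) / stdev
    if z > THRESHOLD:
        print(f"hour {i}: {hourly_logins[i]} logins (z = {z:.1f}) -> alert")
```

Even this crude version shows why the approach reduces analyst load: the rule adapts to each entity’s own history rather than relying on a fixed H/M/L severity assigned at detection time.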

If SOC efficiency is in your wheelhouse, we’d love you to get a copy.

This program was sponsored by Dropzone AI, HYAS Infosec, Radiant Security, and Sevco Security.

Cybersecurity Perspectives 2024: Enterprises Race to Defend Against Accelerated Pace of Emerging Threats
https://ostermanresearch.com/2024/05/24/scalevp-perspectives-2024/
Thu, 23 May 2024

Osterman Research announces the publication of a new white paper – Cybersecurity Perspectives 2024: Enterprises Race to Defend Against Accelerated Pace of Emerging Threats. This white paper was commissioned by Scale Venture Partners.

This is the eleventh year that Scale has produced this research (we’ve helped over the past three years, in collaboration with Everclear Marketing). The survey and report look at evolving threats and solutions, investment priorities for cybersecurity technologies and strategies (make sure you see this year’s top-10 chart and the changes from last year), and funding and buying patterns. The data comes from senior-level decision-makers at organizations with 500 or more employees. As you would expect, AI has an increased focus in this year’s research.

Key findings:

  • Data breaches increased, led by phishing and third-party attacks.
  • CISOs prioritized cloud infrastructure and data center security.
  • Attackers targeted AI models while security played catch up.
  • Security budget growth showed signs of slowing.
  • Market gaps found in software supply chain security and ADX. 

For details on how to get yourself a copy, please check out our portfolio.

Some thoughts on AvePoint’s AI and Information Management Report 2024
https://ostermanresearch.com/2024/04/24/avepoint-ai/
Wed, 24 Apr 2024

AvePoint recently published its inaugural AI and Information Management Report: The Data Problem That’s Stalling AI Success (no registration required). The report is based on a survey of 762 respondents across 16 countries. AvePoint, AIIM, and CIPL were involved in the production of the report.

Takeaways of note:

  • AI success starts with data success
    The core assertion of the report is that if an organization’s data isn’t ready for training AI models (data must be clean, organized, and accessible), AI solutions will be hampered in delivering business results: they will repeat – or amplify – deficiencies in the underlying data, apply poor logic, and produce bad outputs. (A toy illustration of basic readiness checks follows this list.)
  • Data success requires information management disciplines
    Organizations already doing well with IM disciplines are more likely to see early and sustained success with AI investments. Those without mature IM disciplines would do better to address those shortcomings before jumping prematurely on the AI bandwagon.
  • Differentiate between short-term and long-term success of AI
    Short-term success is measured in productivity, efficiency, and other quantifiable metrics. Long-term success requires sustained attention to the quality of the data used to train AI models.
  • Employee training is essential
    Employees need training and upskilling on how to use AI in their jobs, plus how to recognize when AI solutions are producing poor quality outputs. The report suggests that the training budget for AI should be around 40% of the total AI budget.
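As a toy illustration of the “clean, organized, and accessible” point, the sketch below runs a few basic readiness checks on a tabular dataset before it is allowed anywhere near an AI pipeline. The checks, thresholds, and data are our own invention, not AvePoint’s methodology.

```python
# Hypothetical data-readiness checks, loosely illustrating the report's
# "clean, organized, accessible" criteria. Thresholds are invented.
import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4, None],
    "email": ["a@x.com", None, None, "c@x.com", "d@x.com"],
    "created": ["2024-01-05", "2024-02-11", "2024-02-11", "bad-date", "2024-03-02"],
})

missing_rate = df.isna().mean().mean()    # share of missing cells overall
duplicate_rate = df.duplicated().mean()   # share of fully duplicated rows
bad_dates = pd.to_datetime(df["created"], errors="coerce").isna().mean()

print(f"missing cells: {missing_rate:.0%}, duplicate rows: {duplicate_rate:.0%}, "
      f"unparseable dates: {bad_dates:.0%}")

# A crude gate: don't feed this table to an AI pipeline until it passes.
ready = missing_rate < 0.05 and duplicate_rate == 0 and bad_dates == 0
print("ready for AI training:", ready)
```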

AvePoint offers data points from the survey along with recommendations for organizations looking to embark on their AI journey.

And so it begins … the deepfake meeting scams
https://ostermanresearch.com/2024/02/10/deepfake-meeting-scams/
Fri, 09 Feb 2024

The New Zealand Herald covered the story of a deepfake meeting scam attempt against Zuru in November 2023, which [1] featured a deepfake of the CEO attempting to get the CFO to transfer money, but [2] fell short of its goal because, while the deepfake video presented a perfect rendition of the CEO, the “AI wasn’t sophisticated enough for a real-time voice exchange.” The deepfake CEO reverted to a text exchange (by the sounds of it, either a chat session during the Teams meeting or a WhatsApp message exchange), but since the language used during that exchange deviated from the language patterns of the actual CEO, the CFO saw through the fraud attempt.

We’ve come a long way in three months, apparently: a successful and costly incident a couple of weeks back seamlessly merged video and voice of multiple deepfakes in an online meeting to trick a finance employee into transferring a large sum of money. This happened at the Hong Kong office of an unnamed multinational company, resulted in losses of US$25.6 million, and saw the scammers “convincingly replicat[ing] the appearances and voices of targeted individuals using publicly available video and audio footage.”

A couple of thoughts on the above:

  1. There is speculation in the comments section of the Ars Technica article covering the incident that the finance employee in Hong Kong was complicit. Yes, that’s possible, but voicing such speculation is fraught with danger: irrespective of whether it proves true or false, accusations like these have smeared many individuals, and some have taken their own lives out of a sense of public shaming. If the Hong Kong employee was duped, he or she should be supported, not shamed. The incident points to a significant area of weakness in organizational processes and systems that the multinational company – along with everyone else – will need to address.
  2. Requests for secret transfers of money to new bank accounts should be an immediate red flag, irrespective of who is asking. For any organization that doesn’t have a policy on this type of request, along with a strong authorization process that applies in such cases, fraud and other types of questionable behavior will only continue to succeed. (A hypothetical sketch of such an authorization gate follows this list.)
  3. From a tech perspective, this highlights the need for using authorized apps only, enforcing strong identity security controls, and recording and archiving online meeting content for subsequent review.
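On the second point, a strong authorization process is ultimately a workflow control, and even a crude rule engine makes the idea concrete. Below is a hypothetical sketch of such a gate; the names, fields, and thresholds are ours, not a reference to any real product or the affected company’s systems.

```python
# Hypothetical sketch of a payment-authorization gate that treats
# new-beneficiary, secrecy-flagged, and meeting-originated requests
# as automatic escalations. All names and thresholds are invented.
from dataclasses import dataclass

@dataclass
class TransferRequest:
    amount_usd: float
    beneficiary_is_new: bool
    requested_in_meeting: bool  # e.g., asked for live on a video call
    marked_confidential: bool   # "keep this secret" is itself a red flag

def requires_out_of_band_verification(req: TransferRequest,
                                      threshold_usd: float = 10_000) -> bool:
    """True if the request must be confirmed via a separate, pre-registered
    channel (e.g., a call-back to a number on file, never one supplied
    in the request itself)."""
    return (
        req.amount_usd >= threshold_usd
        or req.beneficiary_is_new
        or req.marked_confidential
        or req.requested_in_meeting  # video/voice alone no longer authenticates
    )

req = TransferRequest(25_600_000, True, True, True)  # the Hong Kong pattern
print(requires_out_of_band_verification(req))        # True -> hold until verified
```

The design choice worth noting is that the gate never trusts the channel the request arrived on – which is precisely the channel a deepfake controls.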