Why trust, security, and value are essential in corporate adoption of AI – AvePoint’s new report
20 October 2025 | https://ostermanresearch.com/2025/10/21/avepoint-ai-report-2025/

AvePoint has just published its latest report – The State of AI: Go Beyond the Hype to Navigate Trust, Security, and Value. We conducted the underlying survey (775 respondents across 18 countries) and prepared the results for the AvePoint team. To convey the breadth of the data we collected, the report runs to 61 pages – although many of those are graphs and charts, sub-title pages, and expert perspectives. Please grab yourself a copy and have a read if AI in the enterprise is relevant to your work and future.

From the research data, we found a set of concerns around trust, security, and value that organizations will need to factor into their AI strategies. For example:

  • Inaccurate AI output (68.7%) and data security concerns (68.5%) top the list of reasons organizations are slowing the rollout of generative AI assistants.
  • 75% of organizations experienced at least one AI-related data breach in the past year.
  • 90.6% of organizations claim to have effective information management programs, but only 30.3% have implemented effective data classification systems. Gaps in data governance and information management create significant obstacles to safe AI implementation.
  • 70.7% of organizational data is more than five years old, creating significant training data quality issues for AI systems.
  • Nearly 20% of organizations expect generative AI to create more than half their data within 12 months.
  • … and much, much more. This is a very data-rich report.

For us, it was a tremendous opportunity to work with the AvePoint team to pull this research together. For you, we hope it provides tremendous insight and assistance as you navigate your AI journey.

Next action: get your copy of the report – The State of AI: Go Beyond the Hype to Navigate Trust, Security, and Value.

2025 Cyber Survey: Application security at a breaking point – our latest report
22 June 2025 | https://ostermanresearch.com/2025/06/23/radware-2025-application-security/

We’ve been heads-down on several major reports over the past couple of months (hence the near radio silence), and the first of those has now been published. Please check out Radware’s 2025 Cyber Survey: Application security at a breaking point (published June 12). This is the third year running we’ve had the privilege of working on Radware’s application security research, and this year’s edition extends, expands, and tightens the annual research program.

From an extend perspective, the 2025 survey placed a much stronger focus on the role of AI in cybersecurity – on both the offensive and defensive sides. AI in cybersecurity has become a significant research area for Osterman Research, and each research program gives us the opportunity to refine our questions and contextualize them within a specific strand of the cybersecurity matrix. As you’ll see from the findings, the threat of AI being used to intensify hacking tradecraft is of highest concern to the organizations we surveyed. Respondents voiced a common set of refrains about the effect of AI on threat evolution, detection difficulty, and growing threat diversity. Unsurprisingly, there’s also a common refrain on strengthening application security defenses via AI-based cybersecurity solutions.

From an expand perspective, the research encompassed new threat areas we haven’t looked at in the past couple of research rounds. The major addition was API business logic attacks – a new class of threat that is already being experienced with high frequency. On page 9 of the report, we say: “Business logic attacks present an ideal opportunity for threat actors to use emerging offensive AI capabilities. For example, AI agents can automate the malicious exploration of API sequencing, looking for unexpected logic vulnerabilities and loopholes to exploit. Organizations should expect hackers to develop and share newly crafted playbooks to amplify threat opportunities.” Our annual diagram on the cadence of different attack types brings good news – average cadence is lower than in our previous data set – along with a dire warning: the amplification of threat actor capabilities via AI is likely to increase attack cadence over the next 12 months.

And finally, from a tighten perspective, this year’s research doubled the number of organizations surveyed to allow a deep-dive comparison of two specific industries (financial services and healthcare) against all other industries. Cohort-to-cohort comparisons run throughout the report, with findings called out wherever financial services or healthcare differ from the overall data set or from the other cohorts. These comparisons cover attack patterns (page 6), API usage (page 7), and documentation status (page 8), among other areas.

Please get your copy of the full report from the Radware website.

Join the webinar on June 26

We will be presenting the key findings from our research with Radware later this week. The webinar is on Thursday June 26 – please register to attend. We’d love to have you there.

Some thoughts on Hoxhunt’s research on AI-powered phishing versus human-written phishing
13 May 2025 | https://ostermanresearch.com/2025/05/14/hoxhunt-phishing/

Hoxhunt published a report last month called AI-Powered Phishing Outperforms Elite Red Teams in 2025. It was released in full as a blog post, so read away. No download required, no registration, just click and you’re in. The core assertion in the report is that over the past two years, AI-powered phishing has become more effective at getting a user to click a phishing link than human-written phishing messages. Here’s the data from the chart:

Look at the “effectiveness rate” in the first two data rows. AI (AI-generated phishing messages) goes from 2.9% in March 2023, to 2.1% in November 2024, to 2.78% in March 2025; human (phishing messages written by an elite red team) goes from 4.2% to 2.3% to 2.25% over the same three time periods. Data row three calculates the relative difference: AI goes from 31% less effective, to 10% less effective, to 23.8% more effective – a swing of roughly 55 percentage points across the three time periods. A quick arithmetic check of those relative differences is below.
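As a sanity check, here’s a back-of-envelope calculation of the relative-difference row. We’re assuming the formula is (AI − human) / human – that’s our reading, not a documented method from the report – and the small mismatches against Hoxhunt’s published 10% and 23.8% figures presumably come from rounding in the published rates.

```python
# Back-of-envelope check of the relative-difference row. The formula
# (ai - human) / human is our assumption about how Hoxhunt computed it,
# not a method documented in the report.
rates = {
    "Mar 2023": (2.9, 4.2),
    "Nov 2024": (2.1, 2.3),
    "Mar 2025": (2.78, 2.25),
}

for period, (ai, human) in rates.items():
    relative = (ai - human) / human * 100
    print(f"{period}: AI is {relative:+.1f}% vs human-written phishing")

# Prints:
# Mar 2023: AI is -31.0% vs human-written phishing
# Nov 2024: AI is -8.7% vs human-written phishing
# Mar 2025: AI is +23.6% vs human-written phishing
```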

Hoxhunt says:

  • The absolute failure rate metrics are less informative than the relative performance between the two.
  • As its AI models improved, the attacks became more sophisticated and harder to detect.
  • It’s only a matter of time until AI agents disrupt the phishing landscape, elevating the current effectiveness rate of AI-powered mass phishing to AI-powered spear phishing.
  • Organizations should abandon compliance-based security awareness training and embrace adaptive phishing training. (Hoxhunt offers the latter.)

We say:

  • Neat research project. We love the emphasis on pushing the boundaries of how AI impacts phishing in a longitudinal study.
  • The absolute failure rates above are actually interesting to us – in addition to the relative change. For human-written phishing messages, we read data row two as saying that people became almost twice as good at detecting them between March 2023 and March 2025 (the failure rate almost halved, from 4.2% to 2.25%). Given the data is drawn exclusively from people trained on the Hoxhunt security awareness training platform, that’s interesting.
  • For the trend line in the AI phishing data row, the rate dropped significantly and then jumped again – to almost, but not quite, the March 2023 level. So the March 2023 rate set the high-water mark, and people have become better at detecting AI-written messages over the three time periods. If Hoxhunt does another comparative study in six months, that data point will be the most interesting one to us. Do AI-generated phishing messages increase in effectiveness against people (a rate above 2.78%), or do people get better at detecting AI messages (a rate below 2.78%)? This study tested how threat actors could use AI agents to write better phishing messages, but in parallel, non-threat actors are also using AI to write better emails in general. This should lift the quality of communication for all and sundry. Does the change on both sides smooth out the differences, making detection more difficult? Or does the increased prevalence of using AI to create near-perfect emails throw off signals that AI was involved?
  • It would be even more interesting to have done the same study with another cohort – those trained using what Hoxhunt calls “compliance-based security awareness training” programs.
  • In describing the methodology, Hoxhunt says “The experiment involved a large set of users (2.5M) selected from Hoxhunt’s platform, which has millions of enterprise users, providing a substantial sample size for the study” and “the AI was instructed to create phishing attacks based on the context of the user” (e.g., role, country). This is why data breaches are such a menace to the current and future phishing landscape: threat actors can aggregate data breach records to create profiles of potential targets, then use AI agents to craft profile-specific phishing attacks.

What do you think?

Using AI to Enhance Defensive Cybersecurity – our latest report
21 November 2024 | https://ostermanresearch.com/2024/11/22/using-ai-to-enhance-defensive-cybersecurity-our-latest-report/

For every topic, the key enemies are hype and bluster. Hype is overinflated expectation or advocacy for something that can’t live up to what is said about it. Bluster is the aggressive and noisy positioning of something without the depth of character or capability to follow through. As researchers, breaking through hype and deflating bluster are core to our work.

If you’ve read any of our reports – and there’s quite a collection of them across a wide range of topics – you’ll notice that [1] they aren’t short, and [2] we dig into the details. Our latest report is no exception: a hype-busting, bluster-deflating examination of the role of AI in enhancing defensive cybersecurity. You can get a copy from our portfolio.

To gather the data, we surveyed organizations in the United States on the front lines of cybersecurity attacks. To take the survey, the respondent had to work at an organization with at least 500 employees and/or at least 50 people on their security team. We wanted to get a sense of what they were seeing in terms of changing dynamics with cybersecurity attacks, particularly the impact of offensive AI. And equally, we wanted to get a read on how they were responding to these changing attack dynamics.

We reached four key conclusions in the research:

  • Attackers have the early advantage in generative AI and GANs. Generative AI and GANs are tipping the scales in favor of attackers, but defensive AI tools are catching up, especially in behavioral AI and supervised machine learning.
  • Integrate AI strategically into cybersecurity frameworks. Strategic integration of AI into cybersecurity frameworks is essential to fully leverage the technology’s potential. Organizations should focus on aligning AI investments with core business objectives and risk management practices.
  • AI is a force multiplier for cybersecurity teams. AI enables cybersecurity teams to focus on high-impact activities. However, this requires appropriate training, organizational alignment, and investment in the right tools.
  • The time for embracing AI in defensive cybersecurity is now. As AI reshapes both offensive and defensive cybersecurity, organizations must act swiftly to secure their infrastructures, adopt AI-powered defenses, and prepare their teams for the next generation of AI-enabled threats.

Do these conclusions echo what you’re seeing at your organization? If so, get your copy of the report.

This research was sponsored by Abnormal Security, IRONSCALES, and OpenText.

If your firm provides AI-powered cybersecurity solutions that protect against AI-enabled attacks AND you would like to share this research with your customers and prospects, please get in contact to talk about licensing options.

Notes on our briefing with Securiti – the RSAC2024 files
31 May 2024 | https://ostermanresearch.com/2024/06/01/rsac2024-securiti/

We attended RSAC 2024 in San Francisco from May 6-8. Our days at the conference were packed with back-to-back briefings.

Here are some notes on our briefing with Eric Andrews (VP, Marketing) of Securiti. The briefing was organized by Eric.

Key takeaways from the briefing and some subsequent research:

  • Securiti has been in business for almost five years.
  • Their key focus is driving convergence around data, so that it can be used for making decisions (“data intelligence”). Eric said that the senior leaders they speak with are particularly vocal on the pain / problem of disconnected data, because it makes important aspects of running a business much more difficult, e.g., shaping understanding, enabling cross-team collaboration, and making decisions.
  • The emergence of generative AI and its impact on the creation of more unstructured data has made the problem of disconnected data worse.
  • Securiti has created a knowledge graph for creating an overall understanding of what data is available inside the organization, where it is located, who has access to it, and which regulations are applicable. Eric talks about this as the underlying platform for Securiti and its customers, on top of which multiple use cases can be built. Securiti doesn’t create an aggregated store of all data in the organization; rather it creates a graph of all data while leaving it where it is and subject to the access controls already established.
  • “Shadow AI” is the next frontier of “shadow something” in organizations. It is easy for people / groups to use whatever AI systems they want. It is harder for organizations to have guardrails, oversight, and where necessary, control.
  • Securiti announced an LLM firewall the week before RSAC (see the press release) – well, actually three firewalls in one product. (The product diagram shows the flow running right to left and back again.) The Securiti LLM Firewall protects three stages of using prompt-based LLMs: the initial prompt, the retrieval of data, and the release of data to the requesting user or process. The initial prompt must traverse the prompt firewall, which blocks threats such as prompt injection, the inclusion of sensitive data in prompts, and attempts to bypass security guardrails. The retrieval firewall ensures that only data the user has access to is used in formulating an answer to the prompt, redacts sensitive data, and checks for data poisoning. Finally, the response firewall does a final check before data is presented to the user or process, redacting sensitive information and preventing the release of toxic content or prohibited topics. (A minimal sketch of this three-checkpoint pattern follows this list.)
  • Securiti published a report on securing generative AI applications, which explores the threat to generative AI and LLMs and where its firewall plays.
  • Securiti has a much broader set of product capabilities for data discovery, intelligence, and governance. We didn’t have time to explore the full product set.
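To make the three-checkpoint pattern concrete, here’s a minimal sketch of how such a pipeline could hang together. This is our illustration of the general pattern, not Securiti’s implementation – every name, rule, and the llm callable below is hypothetical.

```python
import re

# Hypothetical sketch of a three-checkpoint LLM firewall pipeline, modeled
# on the prompt / retrieval / response stages Securiti describes. The rules
# here are toy stand-ins for real policy engines - purely illustrative.
INJECTION_PATTERNS = [r"ignore (all )?previous instructions", r"reveal the system prompt"]
SENSITIVE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # e.g., US Social Security numbers

def prompt_firewall(prompt: str) -> str:
    """Checkpoint 1: block injection attempts and redact sensitive data in the prompt."""
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, prompt, re.IGNORECASE):
            raise PermissionError("Prompt blocked: possible injection attempt")
    return SENSITIVE.sub("[REDACTED]", prompt)

def retrieval_firewall(documents: list[dict], user_groups: set[str]) -> list[str]:
    """Checkpoint 2: only use documents the requesting user may already read."""
    allowed = [d for d in documents if d["acl"] & user_groups]  # "acl" is a set of group names
    return [SENSITIVE.sub("[REDACTED]", d["text"]) for d in allowed]

def response_firewall(answer: str) -> str:
    """Checkpoint 3: final redaction pass before the answer is released."""
    return SENSITIVE.sub("[REDACTED]", answer)

def guarded_completion(prompt, documents, user_groups, llm) -> str:
    """Run a prompt through all three checkpoints around a hypothetical llm() callable."""
    safe_prompt = prompt_firewall(prompt)
    context = "\n".join(retrieval_firewall(documents, user_groups))
    return response_firewall(llm(f"{context}\n\n{safe_prompt}"))
```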

For more, see Securiti.

Notes on our briefing with Cohesity – the RSAC2024 files
31 May 2024 | https://ostermanresearch.com/2024/06/01/rsac2024-cohesity/

We attended RSAC 2024 in San Francisco from May 6-8. Our days at the conference were packed with back-to-back briefings.

Here are some notes on our briefing with the Cohesity team: Frank Sessions (Head of Analyst Relations), Sheetal Venkatesh (Director of Product Management), and Chris Hoff (Product Marketing Lead). The briefing was organized by the analyst relations team at Cohesity.

Key takeaways from the briefing and some subsequent explorations:

  • Cohesity was founded 10 years ago. The company focuses on providing ways for organizations to manage, secure, and drive insights from their secondary data – data that is backed up, as opposed to primary / production data. Cohesity is highly focused on how organizations can drive insights from their secondary data, rather than letting it just ROT (become redundant, obsolete, and trivial) over time.
  • Cohesity has more than 4,000 customers and serves 42% of the Fortune 100. In February, the company announced a definitive agreement to merge with the data protection business of Veritas, creating a joint company with deep strengths in data security and management. Once the merger closes, the combined entity will have more than 10,000 customers and 3,000 partners.
  • Cohesity Data Cloud is a service for capturing, managing, securing, and protecting a customer’s secondary data. It includes capabilities for backup and archival, threat scanning, data masking, eDiscovery, and (much) more. Cohesity says its unified platform for data management and security reduces costs for organizations by 50% and enables much faster recovery times after a cyberattack.
  • Cohesity Gaia is a new AI agent that works across the data a customer stores in the Cohesity Data Cloud. It is the next generation of the insight-driven capabilities Cohesity has created. Gaia combines LLM technology with retrieval augmented generation (RAG), the latter of which searches for customer-specific content to provide context for a prompt; both the context and the prompt are then passed to the LLM for generating an answer. The early use cases are around legal and compliance matters (e.g., what happened in case X?), but support for additional use cases is coming. For more on Gaia, see Cohesity’s white paper. (A generic sketch of the RAG pattern follows this list.)
  • Cohesity wrapped a bus and at least three taxis for the show. They seemed ever-present whenever we left the underground show floor.
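As an aside on the mechanics: retrieval augmented generation is a general pattern, and a minimal sketch of it looks something like the code below. This is our generic illustration, not Gaia’s implementation; embed, vector_index, and llm_complete are hypothetical stand-ins for a real embedding model, vector store, and LLM.

```python
# Minimal RAG loop: retrieve customer-specific passages, pack them into the
# prompt as context, and let the LLM generate a grounded answer. All the
# callables here are hypothetical stand-ins, not Cohesity APIs.
def answer_with_rag(question, embed, vector_index, llm_complete, top_k=5):
    # 1. Retrieve: find the stored passages most similar to the question.
    query_vector = embed(question)
    passages = vector_index.search(query_vector, top_k=top_k)

    # 2. Augment: build a prompt that carries the retrieved context.
    context = "\n\n".join(p.text for p in passages)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

    # 3. Generate: the LLM answers, grounded in customer-specific data.
    return llm_complete(prompt)
```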

For more, see Cohesity.

Notes on our briefing with Darktrace – the RSAC2024 files
23 May 2024 | https://ostermanresearch.com/2024/05/24/rsac2024-darktrace/

We attended RSAC 2024 in San Francisco from May 6-8. Our days at the conference were packed with back-to-back briefings.

Here are some notes on our briefing with Mitchell Bezzina (VP, Product Marketing) and Madeline Wilson (Communications Manager) at Darktrace. The briefing was organized by Caroline Dobyns at ICR Lumina.

Our notes from the briefing (enriched with some additional research):

  • Darktrace has built its security offerings as a single platform architecture with AI as a fundamental design layer. Its three solution areas are detection and response (across cloud, email, endpoint, OT, identity, and network), prevention (e.g., attack surface management), and healing (with automated playbooks for recovery).
  • UEBA (user and entity behavior analytics) is a core part of Darktrace’s approach to distinguishing security threats from normal behavior. In its CISO Guide to Cyber AI white paper, the Darktrace team says this about their approach: “… self-learning AI approaches learn what constitutes ‘normal’ by continuously analyzing every device, every user, and the millions of interactions between them, this type of AI can understand ‘self’ for an organization. Once it knows ‘self,’ it can piece together subtle deviations from ‘self’ and connect the dots of a cyber-attack. This way, it can adapt and evolve at the same rate as threats, identifying unfamiliar and novel attacks.” (A toy illustration of this learn-normal, flag-deviations idea follows this list.)
  • Darktrace has grown significantly over the past year. It currently has over 2,300 employees spread across more than 110 countries. Annual recurring revenue in 2023 was $628.4 million.
  • Darktrace offers a Cyber AI analyst for analyzing alerts from the customer’s SIEM. The AI analyst automatically triages new alerts and offers a suggested prioritization for a human analyst. Mitchell said their AI analyst is doing an initial run-through of around 90% of alerts.
  • One investment area for Darktrace is bringing nuance to the handling of a compromised or threatened endpoint. While a common approach is to automatically take a compromised endpoint offline to isolate / quarantine it from other network elements, Darktrace is able to isolate the threats on the endpoint while allowing other connections to continue unhindered. This nuanced approach deals with the threat without stopping the user’s ability to work. See Darktrace/Endpoint for more – although what we call “nuanced” is called “surgical” by Darktrace. Same concept, different word.
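The learn-normal, flag-deviations idea behind UEBA can be shown with a toy example. This sketch is ours, not Darktrace’s: it tracks a single metric per user with a running mean and variance (Welford’s algorithm) and flags values more than three standard deviations from the baseline. Real systems model vastly richer behavior across devices, users, and their interactions.

```python
import math
from collections import defaultdict

class BehaviorBaseline:
    """Toy per-user baseline: flag observations far outside 'normal' (illustrative only)."""

    def __init__(self, threshold: float = 3.0, warmup: int = 10):
        self.stats = defaultdict(lambda: {"n": 0, "mean": 0.0, "m2": 0.0})
        self.threshold = threshold  # standard deviations from the mean
        self.warmup = warmup        # observations needed before scoring

    def observe(self, user: str, value: float) -> bool:
        """Update the user's running stats and return True if the value is anomalous."""
        s = self.stats[user]
        anomalous = False
        if s["n"] >= self.warmup:
            std = math.sqrt(s["m2"] / (s["n"] - 1))
            if std > 0 and abs(value - s["mean"]) / std > self.threshold:
                anomalous = True
        # Welford's online update of mean and variance
        s["n"] += 1
        delta = value - s["mean"]
        s["mean"] += delta / s["n"]
        s["m2"] += delta * (value - s["mean"])
        return anomalous

baseline = BehaviorBaseline()
for mb_uploaded in [5, 6, 5, 7, 6, 5, 6, 7, 5, 6]:  # ten normal days of uploads (MB)
    baseline.observe("alice", mb_uploaded)
print(baseline.observe("alice", 500))  # True: 500 MB is far outside Alice's baseline
```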

For more, see Darktrace.

Some thoughts on AvePoint’s AI and Information Management Report 2024
24 April 2024 | https://ostermanresearch.com/2024/04/24/avepoint-ai/

AvePoint recently published its inaugural AI and Information Management Report: The Data Problem That’s Stalling AI Success (no registration required). The report is based on a survey of 762 respondents across 16 countries. AvePoint, AIIM, and CIPL were involved in the production of the report.

Takeaways of note:

  • AI success starts with data success
    The core assertion of the report is that if an organization’s data isn’t ready for training AI models (data must be clean, organized, and accessible), then AI solutions will be hampered in delivering business results. AI solutions will repeat – or amplify – insufficient data and poor logic, producing bad outputs.
  • Data success requires information management disciplines
    Organizations already doing well with IM disciplines will be more likely to see early and sustained success with AI investments. Those without mature IM disciplines would do better to address those shortcomings before jumping prematurely on the AI bandwagon.
  • Differentiate between short-term and long-term success of AI
    Short-term success is about productivity, efficiency, and other quantifiable metrics. Long-term success requires sustained attention to the quality of the data used to train AI models.
  • Employee training is essential
    Employees need training and upskilling on how to use AI in their jobs, plus how to recognize when AI solutions are producing poor-quality outputs. The report suggests that around 40% of the total AI budget should go to training.

AvePoint offers data points from the survey along with recommendations for organizations looking to embark on their AI journey.
