Research reports we didn’t write – Osterman Research
https://ostermanresearch.com – Insightful research that impacts organizations

Some thoughts on Hoxhunt’s research on AI-powered phishing versus human-written phishing
https://ostermanresearch.com/2025/05/14/hoxhunt-phishing/ – Tue, 13 May 2025

Hoxhunt published a report last month called AI-Powered Phishing Outperforms Elite Red Teams in 2025. It was released in full as a blog post, so read away. No download required, no registration, just click and you’re in. The core assertion in the report is that over the past two years, AI-generated phishing messages have become more effective at getting a user to click a phishing link than human-written phishing messages. Here’s the chart:

Look at the “effectiveness rate” in the first two data rows – AI (AI-generated phishing messages) goes from 2.9% in March 2023 to 2.1% in November 2024 to 2.78% in March 2025, while human (phishing messages written by an elite red team) goes from 4.2% to 2.3% to 2.25% over the same three time periods. Data row three calculates the relative difference: AI goes from 31% less effective, to 10% less effective, to 23.8% more effective – a 55-percentage-point swing in relative effectiveness across the three time periods.
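
The chart’s relative-difference arithmetic is easy to reproduce. Here’s a minimal sketch (the rates are the published figures; small differences from the report’s 23.8% come from rounding in the source chart):

```python
# Effectiveness rates from Hoxhunt's chart: percentage of recipients
# who clicked the phishing link in each period.
periods = ["Mar 2023", "Nov 2024", "Mar 2025"]
ai = [2.9, 2.1, 2.78]      # AI-generated phishing messages
human = [4.2, 2.3, 2.25]   # elite red team (human-written) messages

# Relative difference of AI versus the human-written baseline.
rel = [(a - h) / h * 100 for a, h in zip(ai, human)]
for p, r in zip(periods, rel):
    print(f"{p}: AI is {r:+.1f}% relative to human-written")

# The "55% improvement" is the swing in relative effectiveness between
# the first and last periods (roughly -31% to +24%).
print(f"Swing: {rel[-1] - rel[0]:.0f} percentage points")
```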

Hoxhunt says:

  • The absolute failure rate metrics are less informative than the relative performance between the two.
  • As its AI models improved, the attacks became more sophisticated and harder to detect.
  • It’s only a matter of time until AI agents disrupt the phishing landscape, elevating the current effectiveness rate of AI-powered mass phishing to AI-powered spear phishing.
  • Organizations should cease and desist from compliance-based security awareness training and embrace adaptive phishing training. Hoxhunt offers the latter.

We say:

  • Neat research project. We love the emphasis on pushing the boundaries of how AI impacts phishing in a longitudinal study.
  • The absolute failure rates above are actually interesting to us – in addition to the relative change. In terms of absolute failure rates, for human-written phishing messages, we read data row two as saying that people have become almost twice as good (failure rate almost halved from 4.2% to 2.25%) at detecting human-written phishing messages from March 2023 to March 2025. Given the data is drawn exclusively from people trained by the Hoxhunt security awareness training platform, that’s interesting.
  • For the trend line in the AI phishing data row, it dropped significantly then jumped again – to almost but not quite as high as the March 2023 rate. So … the March 2023 rate set the high water mark, but people have become better at detecting AI-written messages over the three time periods. If Hoxhunt does another comparative study in 6 months, that data point will be the most interesting one to us. Do AI-generated phishing messages increase in effectiveness against people (e.g., a rate of >2.78%), or do people get better at detecting AI messages (e.g., a rate of <2.78%)? This study tested how threat actors could use AI agents to write better phishing messages, but in parallel, non-threat actors are also using AI to write better emails in general. This should lift the quality of communication for all and sundry. So does the change on both sides smooth out the differences, making detection more difficult? Or does the increased prevalence of using AI to create near-perfect emails throw off the signals that AI was involved?
  • It would be even more interesting to have done the same study with another cohort – those trained using what Hoxhunt calls “compliance-based security awareness training” programs.
  • In describing the methodology, Hoxhunt says “The experiment involved a large set of users (2.5M) selected from Hoxhunt’s platform, which has millions of enterprise users, providing a substantial sample size for the study” and “the AI was instructed to create phishing attacks based on the context of the user” (e.g., role, country). This is why data breaches are such a menace to the current and future phishing landscape – where threat actors aggregate data breach records to create profiles of potential targets and use AI agents to craft profile-specific phishing attacks.

What do you think?

Some thoughts on CrowdStrike’s Global Threat Report 2025
https://ostermanresearch.com/2025/05/03/crowdstrike-global-threat-report-2025/ – Fri, 02 May 2025

CrowdStrike published its Global Threat Report in February 2025. We have been reading it carefully over the past month. First off, many thanks to CrowdStrike for assembling this data and designing such a well-presented report. The report is rich in details and examples; we took a lot of notes based on what CrowdStrike has seen during 2024. 

Highlights:

  • CrowdStrike writes about threat actors using AI, something we have highlighted too. Key points made by CrowdStrike on the use of AI: threat actors are increasing their productivity by using AI, threat actors are “early and avid adopters” of generative AI, and it’s still early days for the weaponization of AI in malicious attacks (we don’t yet know how far it will go). CrowdStrike’s conclusion is clear, though: all the evidence points to threat actors making greater use of generative AI in 2025 in multiple types of threat campaigns, e.g., social engineering, network intrusion, insider threat, and election interference.
  • The report is rich in details on the cyber threat and espionage activities of nation-state and nation-aligned actors (e.g., North Korea, China). For what CrowdStrike refers to as China-nexus adversaries, 2024 was a year in which “operations matured in capability and capacity” and involved “increasingly bold targeting, stealthier tactics, and specialized operations.” CrowdStrike tracked significant growth in intrusions from China-nexus adversaries in 2024 across all sectors, along with efforts by adversaries to obfuscate their threat operations. From an Osterman Research perspective, if there was ever a time when other nations need at least top-level defensive programs and government agencies taking point on responding to cyber activities, it is now. In this regard, the current political games around CISA are undermining national security and cybersecurity within the United States.
  • Email security is a significant research area at Osterman Research. CrowdStrike asserts that threat actors are moving away from phishing to alternative access methods for gaining a foothold into networks, with a particular emphasis on social engineering with phone calls, including callback phishing and help desk social engineering attacks. Yes, we agree that there is growth in the second, but whether that’s at the expense of the first or in combination with the first is unclear. We’d agree that threat actors are increasingly using multi-stage phishing attacks that use some combination of email, phone interaction, and an attempt to shift interactions to other less-secured apps rather than phishing by email alone.
  • The report profiles the efforts of North Korea-nexus adversaries at infiltrating organizations with IT workers. This offers access to sensitive data and system privileges for malicious purposes, as well as the salary. IT workers from North Korea that infiltrate organizations set up means of retaining access to cloud and IT resources even if their employment is terminated.
  • The threat of identity compromise is a theme in the report, with CrowdStrike indicating that attacks leveraging compromised identities are “among the most effective entry methods” and the primary initial vector for one third of all cloud incidents in 1H 2024. On page 23, CrowdStrike talks about a malvertising campaign linked with identity security compromise. You won’t get disagreement from us on the threat of compromised identity. See our recent research on MFA for our 2024 contribution to strengthening identity security. We will be extending this in 2025, as there is more to be done. 
  • The section on exploiting vulnerabilities (starting on page 34) covers exploit chaining, among other approaches used by threat actors. The report raises a fundamental implication of exploit chaining for how organizations prioritize patches. Since defenders often analyze vulnerabilities in isolation, based on their individual characteristics, decisions on which ones to patch and on what cadence ignore the calculus of chaining. CrowdStrike gives the example of pre-authentication vulnerabilities being patched faster than post-authentication vulnerabilities, the latter of which may be ignored altogether. That is good news for threat actors who look at vulnerabilities more holistically: the unpatched post-auth backlog is just waiting for the right conditions.
  • Also from a vulnerabilities perspective, patching often proceeds at a slower cadence than threat actors’ exploitation activities. CrowdStrike gives an example of this on page 39, where early exploitation activity for three vulnerabilities was detected only 24 hours after a technical blog post was published providing exploitation guidance.

In conclusion, the report is excellent. It is replete with rich details. It will, mind you, take a while to read and digest fully.

What’s missing from CrowdStrike’s report?

The report is missing a major threat section and a specific cybersecurity incident. The missing section would be titled “Supply chain cybersecurity risks,” and the missing incident is the one that CrowdStrike inadvertently unleashed on the world on July 19, 2024. The fallout from that incident disrupted some 8.5 million computers, bringing entire companies to a halt, including banks and airlines. The direct financial costs of the incident were estimated at $5.4 billion, not including the indirect and consequential costs of lost productivity and reputational damage. Organizations need protections in place against the threats and risks that CrowdStrike covers so well in its report, but at the same time they need protections against single-point-of-failure incidents that disrupt business operations around the globe.

Some thoughts on Coalition’s 2024 Cyber Claims Report
https://ostermanresearch.com/2025/03/25/coalition-cyber-claims-2024/ – Tue, 25 Mar 2025

We recently stumbled upon the 2024 Cyber Claims Report from Coalition, an insurance provider in the United States. It was published in April 2024, so hopefully there is a new edition about to hit the streets. Several data points stood out to us:

  • Coalition asserts that “Businesses that reinforced their security controls and embraced partnership with cyber insurance providers were generally more secure than other organizations.” Coalition advocates for an “active approach to cyber risk management.”
  • 56% of claims fielded by Coalition were categorized as “funds transfer fraud” or “business email compromise.” Both types of incidents start in the email inbox, highlighting [1] the success that threat actors are achieving with financially-motivated cybercrime that starts with email, and [2] the criticality of protecting email from all types of cyberthreats.
  • Funds transfer fraud is where a business is tricked into transferring money into a fraudster’s bank account, usually based on a fraudulent email request. The average loss across 2023 (the reporting timeframe of the 2024 report) was $278,000. On page 11 of the report is a paragraph we could have written based on our recent research – “Cybersecurity trends point to threat actors using generative artificial intelligence (AI) tools to launch more sophisticated attacks. Phishing emails are becoming more credible and harder to detect, and threat actors are believed to be using AI to parse information faster, communicate more efficiently, and generate campaigns targeted toward specific companies — all of which may contribute to increases in FTF claims.” At Osterman, we’d just call this business email compromise.
  • Coalition gives the example of a client who transferred $4.9 million to a bank account in Hong Kong based on a fraudulent invoice. Through Coalition’s assistance and coordination with the FBI and law enforcement agencies, the client got all the money back.
  • In Coalition’s use of terms, business email compromise incidents, by comparison, are defined as events where a threat actor gains access to the inbox but doesn’t get direct access to funds. Instead, they use the compromised account to “wait inside the network and send phishing emails to compromise a user with direct access to money.” At Osterman, we’d call this account takeover and note its correlation with internal phishing and supply chain compromise scenarios.
  • The frequency of ransomware incidents is much lower than the high water mark of 2021, but the average cost per incident is significantly higher than in 2021. In other words, fewer attacks, but each one costs more.

For more, get your copy from Coalition’s web site.

Some thoughts on the new Ironscales report on deepfakes
https://ostermanresearch.com/2024/10/11/review-ironscales-deepfakes/ – Thu, 10 Oct 2024

IRONSCALES released its latest threat report last week – Deepfakes: Assessing Organizational Readiness in the Face of This Emerging Cyber Threat. We wrote earlier this year about the emergence of deepfake meeting scams, so this threat report is topical and timely.

Key stats and ideas from the report:

  • 94% of survey respondents have some level of concern about the security implications of deepfakes.
  • The increasing sophistication of deepfake technologies has left many people struggling to differentiate artificially generated content from reality.
  • The worst of what deepfake-enabled threats have to offer is still to come. 64% of respondents believe the volume of these attacks will increase in the next 12-18 months.
  • 53% of respondents say that email is an “extreme threat” as a channel for deepfake attacks.

Our POV:

  • 94% said they had some level of concern about deepfakes, and so they should. We think that 100% of respondents should have been concerned. It is still very early days for the weaponization of deepfake technology, and the various ways in which it will be used by threat actors for malicious ends remain to be seen. As an industry, we don’t have a good enough grasp of the full picture yet, such as whether deepfake threats are just audio and video, whether they originate in email or are subsequent attack methods in a multi-stage coordinated targeted attack, and so on.
  • Deepfakes – especially of the live audio and video kind – are a uniquely AI-enabled cyberthreat. This will demand AI-powered cybersecurity solutions to detect and respond.
  • As an industry, we’ve talked about impersonation as a threat for a long time, often in the context of vendor impersonation (for business email compromise) or domain impersonation (for phishing attacks in general). Deepfakes are several levels up on the impersonation side. We’ll need to be careful with language though, to differentiate different types of attacks and, by implication, different types of approaches for detecting and stopping such attacks. It doesn’t make a lot of sense for everything that’s fake to become a “deepfake.”

And just a reminder: IRONSCALES is a client at Osterman Research. We’ve had the privilege of working with IRONSCALES on multiple research projects in recent years. We didn’t, however, have any participation in the formulation, execution, or delivery of this research.

Some thoughts on Cobalt’s 2024 State of Pentesting Report
https://ostermanresearch.com/2024/06/29/cobalt-pentesting-2024/ – Fri, 28 Jun 2024

Cobalt published its sixth annual report on pentesting last month (May 2024). As a company that offers pentesting as a service, Cobalt is well-positioned to leverage its aggregated data set to report on trends and findings year-on-year. The report complements Cobalt’s internal data with a large survey of cybersecurity professionals in the United States and United Kingdom.

Key findings from the report that were of interest here:

  • Cobalt conducted 4,068 manual pentesting engagements during 2023, up 31% from the 3,100 it conducted in 2022. With 400 specialist pentesters on call, this averages out at about 10 per pentester per year.
  • Cobalt listed several reasons why pentest numbers increased: new regulatory compliance requirements, broadening of the attack surface, AI-generated code, the ongoing skills gaps at organizations, and budget reductions.
  • AI is one of the major trends covered in the report. There are several concerning conclusions based on Cobalt’s observations. First, tools that increase the speed of software development (including AI features) lead to an increase in the number of security vulnerabilities found, NOT to better quality software. Second, in the rush to embrace “all things AI,” security measures are often overlooked during implementation and during the subsequent changes as models learn. Third, 70% of respondents indicated they had seen evidence of external threat actors using AI to increase the quality and severity of cyberattacks.
  • The number of CVEs identified and catalogued in 2023 increased by 15% over 2022. The number of security findings discovered per Cobalt pentest engagement increased 21% in 2023 versus 2022. Some of this will be due to the increased number of CVEs, but not all of it. Cobalt’s pentesters appear to have higher efficacy at finding additional vulnerabilities, possibly due to reduced software quality via AI, better tooling from Cobalt, or more experience versus 2022.
  • Large language models (LLMs) need to be tested. Cobalt offers this as a newish service. The three most commonly found vulnerabilities during LLM pentesting engagements in 2023 were prompt injection, model denial-of-service attacks, and prompt leaking, where sensitive information is inappropriately disclosed.
  • Organizations are taking longer to fix identified vulnerabilities and are fixing fewer of them, too. This net-nets to unaddressed vulnerabilities creating opportunities for compromise, breach, and other types of attack for a longer period of time – which is good for no one except threat actors.
  • Layoffs and budget cuts have a devastating impact on software quality and vulnerability mitigation, along with the physical health and mental wellbeing of remaining staff (with C-level respondents indicating an even higher set of negative outcomes).
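
As a quick sanity check on the first bullet’s numbers, the growth rate and the per-pentester workload net out as follows (the per-pentester average is our own back-of-envelope derivation, not a figure Cobalt reports):

```python
# Figures from Cobalt's report; the per-pentester average is our own
# illustrative derivation.
engagements_2023 = 4068
engagements_2022 = 3100
pentesters = 400

growth = (engagements_2023 - engagements_2022) / engagements_2022 * 100
per_tester = engagements_2023 / pentesters

print(f"Engagement growth 2022 -> 2023: {growth:.0f}%")                # ~31%
print(f"Average engagements per pentester in 2023: {per_tester:.1f}")  # ~10.2
```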

For more, get your copy of Cobalt’s report.

Some thoughts on Fortra’s Phishing Benchmark Global Report 2023
https://ostermanresearch.com/2024/06/01/some-thoughts-on-fortras-phishing-benchmark-global-report-2023/ – Fri, 31 May 2024

Fortra published a report presenting the findings from its phishing simulation exercise in October 2023 with around 300 organizations and 1.37 million individual participants. The press release presents the highlights. Full details are available in the report itself (registration required).

Key findings per the report:

  • On receiving the phishing simulation message, 10.4% of all recipients clicked the link. This opened a web page that masqueraded as a valid site and asked for username and password details. Of those who had clicked, 65% entered their details and lost their credentials. Here’s one of the diagrams from the report.
  • Aaarrgghhh.
  • Per Fortra, “Phishing links don’t click themselves – human beings, however well-intentioned, do.”
  • Click rates varied by industry – education was worst (16.7% vs. 10.4% average), finance was best (6.3% vs. 10.4% average). There’s a full breakdown in the report.
  • The percentage of recipients-who-clicked-the-link who then submitted their password also varies by industry. Education takes worst place again – 72.8% of those who clicked lost their credentials. Finance is third from best, at 45.2%. Agriculture and food took best place, at 29.1%.
  • A decade ago, the Verizon 2013 Data Breach Investigation Report said this about the mathematics of phishing: sending 10 phishing messages almost guarantees a click. Put another way, a roughly 10% per-message click rate. Page 38 of the VDBIR 2013 makes the point in a call-out box.
  • A decade later, click rates remain the same or are slightly worse.
  • Yes, users need to be trained – especially as threats become more sophisticated due to AI, phishing toolkits, MFA bypass as routine, etc. Don’t stop doing that. But … revisit / reassess / recheck the efficacy of whatever technical protections you are using and keep those phishing and BEC emails as far away from a user’s inbox as possible.
  • On that note, you should read our report on the role of AI in email security.
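
The click and credential-submission rates above compound multiplicatively. A minimal sketch (the end-to-end rate and the 10-message probability are our own derivations under an independence assumption, not figures quoted by Fortra or Verizon):

```python
# Fortra's funnel: 10.4% of recipients clicked the link; 65% of those
# who clicked then entered their credentials.
click_rate = 0.104
submit_given_click = 0.65

# End-to-end: share of all recipients who lost credentials.
cred_loss_rate = click_rate * submit_given_click
print(f"Recipients who lost credentials: {cred_loss_rate:.1%}")  # 6.8%

# Verizon's rule of thumb revisited: at a ~10% per-message click rate,
# the probability of at least one click across 10 messages, assuming
# independent recipients.
p_any_click = 1 - (1 - click_rate) ** 10
print(f"P(at least one click in 10 messages): {p_any_click:.0%}")
```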
Some thoughts on Cybersixgill’s State of the Underground 2024 report
https://ostermanresearch.com/2024/04/29/cybersixgill-2024/ – Mon, 29 Apr 2024

We had a briefing with Cybersixgill earlier this month to talk threat intelligence, disruption, leveraging generative AI in threat intelligence, supporting SOC analysts with AI-infused analysis, and more. Cybersixgill collects and analyzes 10 million threat signals each day for its threat intelligence service.

Cybersixgill released its annual State of the Underground report in February (read the press release for the summary and register for the full details in the report). The report itself is 52 pages in length, and covers threat actor trends across six areas, e.g., compromised credit cards, messaging platform usage, initial access.

Here’s our key takeaways:

  • Compromised credit cards less of a problem
    The market for compromised credit cards has collapsed over the past 5 years, from 140 million cards in 2019 to 12 million in 2023. Improved fraud detection and prevention is a key contributor to this change.
  • Less activity on underground forums and messaging apps
    Threat actors are making less use of underground forums and messaging apps, e.g., Telegram. However, much of this is due to significantly less activity by right-wing extremist groups and the disbandment of popular forums.
  • Vulnerabilities need to be paired with likelihood of exploit to be meaningful in defensive strategies
    There were 7 CVEs introduced in 2023 that scored the highest marks for likelihood of being exploited within the next 90 days. MOVEit Transfer was in first place. In the top 10, half were for Microsoft products.
  • Stealer malware continues to get worse
    Stealer malware grew in popularity in 2023, with 617 new types of malware (including stealers) mentioned on underground forums. Raccoon Stealer had >50% market share in 2023.
  • Availability of compromised endpoints for sale increased, too
    The number of compromised endpoints increased (almost doubled, actually), which is problematic since they can be used for data theft, lateral movement, botnet recruitment, and more.
  • Ransomware attack volumes were down, but ransom payouts up significantly
    Fewer attacks (by around 10%) combined with significantly higher ransom payouts (almost doubled) means ransomware continues to be a significant threat. While the likelihood of being targeted went down, for those that are targeted and compromised, costs are much higher.
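
A rough sketch of the arithmetic behind two of these trends (the ransomware multipliers are our own assumptions, used only to illustrate the report’s directional claims):

```python
# Compromised credit card market: figures from the report.
cards_2019 = 140_000_000
cards_2023 = 12_000_000
decline = (1 - cards_2023 / cards_2019) * 100
print(f"Compromised-card market decline, 2019 -> 2023: {decline:.0f}%")  # ~91%

# Ransomware: ~10% fewer attacks, payouts "almost doubled". These
# multipliers are assumptions for illustration, not report figures.
attack_volume_multiplier = 0.90   # assumed: 10% fewer attacks
payout_multiplier = 2.0           # assumed: payouts roughly doubled
expected_cost_multiplier = attack_volume_multiplier * payout_multiplier
print(f"Expected-cost multiplier: {expected_cost_multiplier:.1f}x")  # 1.8x
```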

Thanks to Cybersixgill for assembling such a good resource.

Some thoughts on AvePoint’s AI and Information Management Report 2024
https://ostermanresearch.com/2024/04/24/avepoint-ai/ – Wed, 24 Apr 2024

AvePoint recently published its inaugural AI and Information Management Report: The Data Problem That’s Stalling AI Success (no registration required). The report is based on a survey of 762 respondents across 16 countries. AvePoint, AIIM, and CIPL were involved in the production of the report.

Takeaways of note:

  • AI success starts with data success
    The core assertion of the report is that if an organization’s data isn’t ready for training AI models (data must be clean, organized, and accessible), then AI solutions will be hampered in delivering business results. AI solutions will repeat – or amplify – insufficient data and poor logic, and produce bad outputs.
  • Data success requires information management disciplines
    Organizations already doing well with IM disciplines will be more likely to see early and sustained success with AI investments. Those without mature IM disciplines would do better to address shortcomings here before jumping prematurely on the AI bandwagon.
  • Differentiate between short-term and long-term success of AI
    Short-term success is productivity, efficiency, and other quantifiable metrics. Long-term success requires sustaining, over time, data quality that can be used to train AI models.
  • Employee training is essential
    Employees need training and upskilling on how to use AI in their jobs, plus how to recognize when AI solutions are producing poor quality outputs. The report suggests that the training budget for AI should be around 40% of the total AI budget.

AvePoint offers data points from the survey along with recommendations for organizations looking to embark on their AI journey.

Some thoughts on Perception Point’s 2024 Annual Report on cybersecurity trends and insights
https://ostermanresearch.com/2024/04/17/perception-point-annual-report-2024/ – Wed, 17 Apr 2024

Perception Point recently published its 2024 annual report on cybersecurity trends and insights, reporting on data and trends seen across its data sets during 2023. You can get a copy from Perception Point (registration required).

There are some useful data points in the report. These stood out:

  • 20% illegitimacy rate
    1 in 5 emails are not legitimate. That is, 80% make good business sense within the workflow of a given individual. 20% don’t.
  • 70% of attacks are phishing; huge increase in BEC attacks
    Phishing attacks remain the most frequently observed threat type, at 70% within the Perception Point data. In the FBI’s data from 2023 – based on a different data set of incidents reported to the FBI’s IC3 unit – it was 34% phishing (299K phishing out of 880K total incidents). Perception Point also reported a massive increase in the number of BEC attacks it identified, to 18.6% of all attacks. Per the FBI data, BEC occurs less frequently but is significantly more costly than plain phishing attacks.
  • AI in email attacks
    “2023 was defined by the advances and widespread usability of generative AI … and its use in more intricate and deceptive malicious campaigns.” They even quote our report on The Role of AI in Email Security (which they co-sponsored).
  • Details on attacks against SaaS apps, such as Zendesk and Salesforce
    Perception Point protects users from threats, irrespective of where they come from. Email was the starting point. Collaboration and SaaS apps followed. The report dives into some of the forms that attacks against Zendesk and Salesforce take (among others), and why organizations need security protections over uploaded content and shared URLs via these services.
  • Hospitality sector under attack
    “Phishing attacks against the hospitality sector are often focused on stealing the Booking.com login credentials for a given hotel – so they can then access hotel profiles and acquire guest information, including emails, phone numbers, and financial details – for use in large-scale phishing campaigns.”
Some thoughts on NetLine’s 2024 State of B2B Content Consumption and Demand Report
https://ostermanresearch.com/2024/04/17/netline2024/ – Wed, 17 Apr 2024

NetLine published its annual report on buyer-level intent data. The data is based on the 6.2 million fully-permissioned, first-party leads the company acquired in 2023. You can get a copy from the NetLine site (registration required).

Some of the standouts:

  • eBooks were the most popular content form for B2B professionals (39.5% of all demand). But this format most often led to other types of subsequent content (book summaries, software, courses) that were less likely to be associated with a buying decision within 12 months.
  • The two content types most strongly aligned with purchase intent within the next 12 months were playbooks and case studies. The other four types with strong alignment were trend reports, analyst reports, white papers, and live webinars.
  • NetLine tracked a significant increase in interest for AI-related content, which was 5x higher in 2023 than the previous year, and is on track in 2024 to be almost twice as high as 2023.
