Notes on our briefing with Securiti – the RSAC2024 files
https://ostermanresearch.com/2024/06/01/rsac2024-securiti/
Fri, 31 May 2024

We attended RSAC 2024 in San Francisco from May 6-8. Our days at the conference were packed with back-to-back briefings.

Here are some notes from our briefing with Eric Andrews (VP, Marketing) of Securiti, who also organized the briefing.

Key takeaways from the briefing and some subsequent research:

  • Securiti has been in business for almost five years.
  • Their key focus is driving convergence around data so that it can be used for making decisions (“data intelligence”). Eric said that the senior leaders they speak with are particularly vocal about the pain of disconnected data, because it makes important aspects of running a business much more difficult, e.g., shaping understanding, enabling cross-team collaboration, and making decisions.
  • The emergence of generative AI and its impact on the creation of more unstructured data has made the problem of disconnected data worse.
  • Securiti has created a knowledge graph that provides an overall understanding of what data is available inside the organization, where it is located, who has access to it, and which regulations apply. Eric described this as the underlying platform for Securiti and its customers, on top of which multiple use cases can be built. Securiti doesn’t create an aggregated store of all data in the organization; rather, it creates a graph of all data while leaving it where it is, subject to the access controls already established (see the first sketch after this list).
  • “Shadow AI” is the next frontier of “shadow something” in organizations. It is easy for people / groups to use whatever AI systems they want. It is harder for organizations to have guardrails, oversight, and where necessary, control.
  • Securiti announced an LLM firewall the week before RSAC (see press release) – well, actually three firewalls in one product. See the diagram below (the flow runs right to left and back again). The Securiti LLM Firewall protects three stages of using prompt-based LLMs: the initial prompt, the retrieval of data, and the release of data to the requesting user / process. The initial prompt must traverse the prompt firewall, which blocks threats such as prompt injection, the inclusion of sensitive data in prompts, and attempts to bypass security guardrails. The retrieval firewall ensures that only data the user has access to is used in formulating an answer to the prompt, redacts sensitive data, and checks for data poisoning. Finally, the response firewall performs a last check before data is presented to the user / process, redacting sensitive information and preventing the release of toxic content or prohibited topics. A sketch of this three-stage flow appears after this list.
  • Securiti published a report on securing generative AI applications, which explores the threats to generative AI and LLMs and where its firewall fits.
  • Securiti has a much broader set of product capabilities for data discovery, intelligence, and governance. We didn’t have time to explore the full product set.
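
To make the knowledge-graph idea concrete, here is a minimal Python sketch of the pattern as we understood it from the briefing: metadata about data assets (what the data is, where it lives, who can access it, which regulations apply) lives in a graph, while the data itself stays in its source system. The class names and fields here are our own illustration, not Securiti’s API.

```python
from dataclasses import dataclass

# Hypothetical sketch of a data knowledge graph: it records metadata about
# data assets (what the data is, where it lives, who can access it, and
# which regulations apply) while the data itself stays in its source
# system under its existing access controls; nothing is copied into an
# aggregated store.

@dataclass
class DataAsset:
    name: str         # e.g., "customer_orders"
    location: str     # source system, e.g., "snowflake://prod/sales"
    sensitivity: str  # e.g., "PII", "public"

class KnowledgeGraph:
    def __init__(self):
        self.assets = {}       # name -> DataAsset
        self.access = {}       # name -> set of principals with access
        self.regulations = {}  # name -> set of applicable regulations

    def register(self, asset: DataAsset, principals: set, regs: set) -> None:
        """Record metadata edges for an asset; the underlying data is untouched."""
        self.assets[asset.name] = asset
        self.access[asset.name] = set(principals)
        self.regulations[asset.name] = set(regs)

    def assets_visible_to(self, principal: str) -> list:
        """Answer 'what data can this user see?' from metadata alone."""
        return [a for name, a in self.assets.items()
                if principal in self.access[name]]

graph = KnowledgeGraph()
graph.register(DataAsset("customer_orders", "snowflake://prod/sales", "PII"),
               principals={"alice", "sales-team"}, regs={"GDPR", "CCPA"})
print([a.name for a in graph.assets_visible_to("alice")])  # ['customer_orders']
```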
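
And here is a minimal sketch of the three-checkpoint firewall flow described above. Everything in it (the regex rule, the injection markers, the ACL shape, the function names) is our own toy illustration of the pattern, not Securiti’s implementation.

```python
import re

# Illustrative three-checkpoint guardrail pipeline around a prompt-based LLM:
#   1. prompt firewall    - inspects the inbound prompt
#   2. retrieval firewall - filters and redacts retrieved context
#   3. response firewall  - final check before the answer is released
# All rules below are toy placeholders, not Securiti's actual checks.

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # stand-in "sensitive data" pattern
INJECTION_MARKERS = ("ignore previous instructions", "disregard your rules")

def prompt_firewall(prompt: str) -> str:
    low = prompt.lower()
    if any(marker in low for marker in INJECTION_MARKERS):
        raise ValueError("blocked: possible prompt injection")
    return SSN.sub("[REDACTED]", prompt)  # keep sensitive data out of the prompt

def retrieval_firewall(docs: list, user: str, acl: dict) -> list:
    # Pass through only documents the requesting user may see,
    # then redact sensitive values from what remains.
    allowed = [d for d in docs if user in acl.get(d["id"], set())]
    return [SSN.sub("[REDACTED]", d["text"]) for d in allowed]

def response_firewall(answer_text: str) -> str:
    return SSN.sub("[REDACTED]", answer_text)  # last-chance redaction

def answer(prompt: str, user: str, docs: list, acl: dict, llm) -> str:
    safe_prompt = prompt_firewall(prompt)          # checkpoint 1
    context = retrieval_firewall(docs, user, acl)  # checkpoint 2
    raw = llm(safe_prompt, context)                # llm is any callable you supply
    return response_firewall(raw)                  # checkpoint 3

docs = [{"id": "d1", "text": "Order 123 shipped; customer SSN 123-45-6789"},
        {"id": "d2", "text": "Internal salary spreadsheet"}]
acl = {"d1": {"alice"}, "d2": {"hr-team"}}
fake_llm = lambda p, ctx: "Answer based on: " + " | ".join(ctx)
print(answer("What shipped for this customer?", "alice", docs, acl, fake_llm))
# -> Answer based on: Order 123 shipped; customer SSN [REDACTED]
```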

For more, see Securiti.

Recent news – April 8
https://ostermanresearch.com/2024/04/08/news20240408/
Mon, 08 Apr 2024

What we’ve been reading:

  • How AI is fuelling frighteningly effective scams
    Reviews areas of malicious use of AI technology for voice and video scams, along with a brief mention of phishing. Looks at how standard AI tools can be “weaponized by criminals to create realistic yet bogus voices, websites, videos and other content to perpetrate fraud,” and why voice cloning is a particular problem for the financial services industry, which has built transaction authorization around voice signatures for years.
    AARP
  • Time to click for phishing links
    KnowBe4 runs the numbers on when people click on links in phishing emails. Two findings stood out: first, links in phishing messages received Monday to Friday are routinely clicked by 20% or more of users; and second, more than half of users click to open phishing emails within 60 minutes of receiving the message.
    KnowBe4 Blog
  • The impact of generative AI on the efficacy of security awareness training
    Explores the impact of generative AI on the performance of security awareness training, emphasizing the changing dynamics as cybercriminals leverage generative AI services to remove traditional signals of compromise (e.g., spelling mistakes, poor language use) and weaponize those services to deliver individually targeted phishing messages based on a social media profile.
    SCMagazine