Misinformation from LLMs poses a core vulnerability for applications relying on these models. Misinformation occurs when LLMs produce false or misleading information that appears credible. This vulnerability can lead to security breaches, reputational damage, and legal liability. One of the major causes of misinformation is hallucination—when the LLM generates content that seems accurate but is […]| OWASP Gen AI Security Project
Sensitive information can affect both the LLM and its application context. This includes personal identifiable information (PII), financial details, health records, confidential business data, security credentials, and legal documents. Proprietary models may also have unique training methods and source code considered sensitive, especially in closed or foundation models. LLMs, especially when embedded in applications, risk […]| OWASP Gen AI Security Project
A Prompt Injection Vulnerability occurs when user prompts alter the LLM’s behavior or output in unintended ways. These inputs can affect the model even if they are imperceptible to humans, therefore prompt injections do not need to be human-visible/readable, as long as the content is parsed by the model. Prompt Injection vulnerabilities exist in how […]| OWASP Gen AI Security Project
The OWASP Top 10 for LLM and Generative AI project , genai.owasp.org, team is thrilled to unveil the Gen AI Red Teaming Guide which provides a practical approach to evaluating LLM and Generative AI vulnerabilities—a new resource from our Red Teaming Initiative. As Generative AI technologies like the Large Language Models (LLMs) evolve at breakneck speed, the […] The post Announcing the OWASP Gen AI Red Teaming Guide appeared first on OWASP Gen AI Security Project.| OWASP Gen AI Security Project
As OWASP’s Agentic Security Initiative (ASI) gains momentum, its impact is already being felt across the AI security landscape. The Agentic AI – Threats and Mitigations taxonomy is now powering real-world developer tools that embed security into the workflows of AI builders and red teams. In this post, we highlight three standout tools—PENSAR, SPLX.AI Agentic Radar, and AI&ME—that are adopting the OWASP ASI taxonomy to help teams test, defend, and build secure agentic systems. This gr...| OWASP Gen AI Security Project
OWASP Gen AI Incident & Exploit Round-up, Q2 (Mar-Jun) 2025 About the Round-up This is not an exhaustive list, but a semi-regular blog where we aim to track and share insights on recent exploits involving or targeting Generative AI. Our goal is to provide a clear summary of each reported incident, including its impact, a […] The post OWASP Gen AI Incident & Exploit Round-up, Q2’25 appeared first on OWASP Gen AI Security Project.| OWASP Gen AI Security Project
New Strategic Partnership with OWASP and the OWASP Gen AI Security Project Includes Joint Content, Events, and Research Initiatives NEW YORK, NY, UNITED STATES, June 26, 2025 /EINPresswire.com/ — CyberRisk Alliance (CRA), a business intelligence company serving the cybersecurity ecosystem, today announced a new strategic partnership with the Open Worldwide Application Security Project (OWASP Foundation), […] The post CyberRisk Alliance and OWASP Join Forces to Advance Application Security...| OWASP Gen AI Security Project
LLM01:2025 Prompt Injection| OWASP Gen AI Security Project
Creating an insecure agent is surprisingly easy. There are new tools and frameworks available that make creating AI Agents relatively simple. However, AI Agents are prone to several threats outlined in the recent Agentic AI – Threats and Mitigations guide that was released in February. The OWASP Gen AI Security Project's recently put on a hackathon in NYC with the goal of building insecure agents. In this blog post we recap the event and the most common security findings we saw from the sub...| OWASP Top 10 for LLM & Generative AI Security
As AI systems begin interacting with live tools and data via the Model Context Protocol (MCP), new security risks emerge that traditional approaches can’t fully address. This post summarizes key insights from the OWASP GenAI Security Project’s latest research on securing MCP, offering practical, defense-in-depth strategies to help developers and defenders build safer agentic AI applications in real time. The post Securing AI’s New Frontier: The Power of Open Collaboration on MCP Securit...| OWASP Top 10 for LLM & Generative AI Security
WILMINGTON, Del., April 17, 2025 — The Open Worldwide Application Security Project’s (OWASP) flagship Generative AI Security Project (https://genai.owasp.org) today announced the addition of nine new sponsors, signaling continued momentum and investment in advancing the state of security for generative AI technologies. The new sponsors—Acuvity, ActiveFence, ByteDance, Cobalt, Protecto, SplxAI, Trend Micro, Troj.AI and Unbound Security—represent a […] The post OWASP Gen AI Security...| OWASP Top 10 for LLM & Generative AI Security
WILMINGTON, Del. — March 27, 2025 — The Open Worldwide Application Security Project (OWASP) announced today that its OWASP Top 10 for LLM and Generative AI List has become The OWASP Gen AI Security Project. The name change reflects the popularity of the initial Top 10 List and the recognition of the project’s expanded focus. […] The post OWASP Top 10 for LLM is now the GenAI Security Project and promoted to OWASP Flagship status appeared first on OWASP Top 10 for LLM & Generative AI S...| OWASP Top 10 for LLM & Generative AI Security
About the Round-up This is not an exhaustive list, but a semi-regular blog where we aim to track and share insights on recent exploits involving or targeting Generative AI. Our goal is to provide a clear summary of each reported incident, including its impact, a breakdown of the attack, relevant vulnerabilities from the OWASP Top […] The post OWASP Gen AI Incident & Exploit Round-up, Jan-Feb 2025 appeared first on OWASP Top 10 for LLM & Generative AI Security.| OWASP Top 10 for LLM & Generative AI Security
The UK Government Department for Science Innovation and Technology (DSIT) published its new voluntary Code of Practice (CoP) for the Cyber Security of AI today, January 31. Based upon 13 principles, the CoP clarifies the responsibilities of different AI stakeholders and is, for the first time, structured alongside the typical AI system lifecycle from planning […] The post OWASP AI Security Guidelines offer a supporting foundation for new UK government AI Security Guidelines appeared first o...| OWASP Top 10 for LLM & Generative AI Security