Explore promptfoo CLI commands for LLM testing - run evaluations, generate datasets, scan models for vulnerabilities, and automate testing workflows via command line| www.promptfoo.dev
Comprehensive catalog of red team attack strategies for systematically identifying and exploiting LLM application vulnerabilities| www.promptfoo.dev
Red team custom AI security tests by implementing specialized generator and grader components to detect vulnerabilities in your unique system architecture| www.promptfoo.dev
Evaluate and secure LLM applications with automated testing, red teaming, and benchmarking. Compare outputs across 50+ providers.| www.promptfoo.dev
Contact Promptfoo for enterprise AI security solutions. Schedule a demo or speak with our sales team about red teaming, guardrails, and compliance.| www.promptfoo.dev
Enable Model Context Protocol (MCP) integration for enhanced tool use, persistent memory, and agentic workflows across providers| www.promptfoo.dev
The BOLA (Broken Object Level Authorization) red teaming plugin is designed to test an AI system's vulnerability to attacks that attempt to access or modify resources belonging to other users or outside the authorized scope (OWASP API 1).| www.promptfoo.dev
This page documents categories of potential LLM vulnerabilities and failure modes.| www.promptfoo.dev
The Harmful Content plugin is designed to test an AI system's ability to handle and respond appropriately to various types of harmful or sensitive content. This plugin covers a wide range of potentially problematic areas, from violent crimes to misinformation.| www.promptfoo.dev
The Divergent Repetition red teaming plugin is designed to test whether an AI system can be manipulated into revealing its training data through repetitive pattern exploitation that causes model divergence.| www.promptfoo.dev
Promptfoo is an open-source tool for red teaming gen AI applications.| www.promptfoo.dev
The BFLA (Broken Function Level Authorization) red teaming plugin is designed to test an AI system's ability to maintain proper authorization controls for specific functions or actions (OWASP API 5).| www.promptfoo.dev
LLM red teaming is a way to find vulnerabilities in AI systems before they're deployed by using simulated adversarial inputs.| www.promptfoo.dev