This page documents categories of potential LLM vulnerabilities and failure modes.
LLM red teaming uses simulated adversarial inputs to find vulnerabilities in AI systems before they're deployed.
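
For illustration, the sketch below shows what a red team configuration might look like in a `promptfooconfig.yaml`. It is a minimal sketch under assumed names: the target id, purpose text, and the specific plugin and strategy identifiers are illustrative and should be checked against the current plugin and strategy reference.

```yaml
# promptfooconfig.yaml -- illustrative sketch, not a canonical example.
# The target, plugin, and strategy names below are assumptions for illustration.
targets:
  - id: openai:gpt-4o-mini        # hypothetical target; point this at your app or model
    label: customer-support-bot

redteam:
  purpose: "Customer support chatbot for a retail store"  # context used to tailor generated attacks
  plugins:
    - harmful            # probe for harmful content generation
    - pii                # probe for personal data leakage
  strategies:
    - jailbreak          # wrap plugin test cases in jailbreak attempts
    - prompt-injection   # wrap plugin test cases in injection attempts
```

With a config like this, a scan is typically kicked off with `npx promptfoo@latest redteam run` and the results reviewed afterwards; verify the exact commands and option names against the CLI docs for your installed version.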