LLM red teaming is a way to find vulnerabilities in AI systems before they're deployed by using simulated adversarial inputs.| www.promptfoo.dev