Learn the critical difference between prompt injection and jailbreaking attacks, with real CVEs, production defenses, and test configurations.| Promptfoo Blog
AI safety vs AI security for LLM apps. Clear examples, test configs, and OWASP-aligned defenses so teams prevent harmful outputs and block adversaries.| Promptfoo Blog
Compare the top open source AI red teaming tools in 2025. See features, use cases, and real differences across Promptfoo, PyRIT, Garak, DeepTeam, and Viper.| www.promptfoo.dev
We raised $18.4M from Insight Partners with participation from Andreessen Horowitz. Funding will accelerate development of the most widely adopted AI security testing solution.| www.promptfoo.dev
Watch Promptfoo catch LLM exploits live at Black Hat USA and DEF CON 33. Booth 4712, Arsenal Labs demos, CEO deep-dive, and a pool-side open bar.| Promptfoo Blog
How right-leaning is Grok? We've released a new testing methodology alongside a dataset of 2,500 political questions.| Promptfoo Blog
A comprehensive guide to AI red teaming for beginners, covering the basics, culture building, and operational feedback loops| Promptfoo Blog
Understanding LLM system cards and their importance for responsible AI deployment| Promptfoo Blog
Learn about the security risks introduced by MCP servers and how to mitigate them using the Promptfoo MCP Proxy, an enterprise solution for MCP security.| Promptfoo Blog
Learn the 10 biggest LLM security risks and practical fixes, in a 5-minute TLDR. Updated for OWASP 2025.| Promptfoo Blog
Promptfoo achieves SOC 2 Type II and ISO 27001 compliance, demonstrating enterprise-grade security for AI red teaming and LLM evaluation tools.| Promptfoo Blog
Detailed comparison of Promptfoo and Microsoft's PyRIT for LLM security testing. Covers attack methods, RAG testing, CI/CD integration, and selection criteria.| www.promptfoo.dev
Compare ModelAudit and ModelScan for ML model security scanning. Learn how comprehensive format support and detection capabilities differ between these tools.| Promptfoo Blog
Learn essential techniques for hardening AI system prompts against injection attacks, unauthorized access, and security vulnerabilities. Includes practical examples using Promptfoo evaluations.| Promptfoo Blog
Compare Promptfoo and Garak for LLM security testing. Learn how dynamic attack generation differs from curated exploits, and when to use each tool.| www.promptfoo.dev
Google Gemini handles text, images, and code - creating unique attack surfaces. Learn how to red team these multimodal capabilities and test for new vulnerabilities.| promptfoo Blog
Promptfoo is introducing our revolutionary, next-generation red teaming agent designed for enterprise-grade LLM agents.| promptfoo Blog
Promptfoo reaches 100K users! Learn about our journey from prompt evaluation to AI red teaming and what's next for AI security.| promptfoo Blog
OpenAI's latest GPT models are more capable but also more vulnerable. Discover new attack vectors and systematic approaches to testing GPT security.| promptfoo Blog
Claude is known for safety, but how secure is it really? Step-by-step guide to red teaming Anthropic's models and uncovering hidden vulnerabilities.| www.promptfoo.dev
A hands-on exploration of Model Context Protocol - the standard that connects AI systems with real-world tools and data| www.promptfoo.dev
OWASP replaced DoS attacks with "unbounded consumption" in their 2025 Top 10. Learn why this broader threat category matters and how to defend against it.| www.promptfoo.dev
Not all foundation models are created equal when it comes to security. Learn what to look for in model cards and how to assess jailbreak resistance before you build.| www.promptfoo.dev
We tested DeepSeek-R1 with 1,156 politically sensitive prompts. The results reveal extensive CCP censorship and how to detect political bias in Chinese AI models.| www.promptfoo.dev