In this blog, we’re taking a closer look at three of the top GenAI security risks: prompt injections, supply chain vulnerabilities, and improper output handling.| protectai.com
I gave a talk on Wednesday at the Bay Area AI Security Meetup about prompt injection, the lethal trifecta and the challenges of securing systems that use MCP. It wasn’t …| Simon Willison’s Weblog
If you are a user of LLM systems that use tools (you can call them “AI agents” if you like) it is critically important that you understand the risk of …| Simon Willison’s Weblog
LLM red teaming is a way to find vulnerabilities in AI systems before they're deployed by using simulated adversarial inputs.| www.promptfoo.dev