- Here's yet another example of a lethal trifecta attack, where an LLM system combines access to private data, exposure to potentially malicious instructions, and a mechanism to communicate data back … (Simon Willison's Weblog)
- If you are a user of LLM systems that use tools (you can call them "AI agents" if you like), it is critically important that you understand the risk of … (Simon Willison's Weblog)
- I presented an invited keynote at the AI Engineer World's Fair in San Francisco this week. This is my third time speaking at the event; here are my talks from October … (Simon Willison's Weblog)
- A team of researchers at Invariant Labs, based in Zurich, Switzerland, has warned developers of a prompt injection […] (DEVCLASS)