In this blog post, we’ll break down the opportunities and security challenges surrounding multi-modal LLMs today and in the future. | protectai.com
I keep seeing people use the term “prompt injection” when they’re actually talking about “jailbreaking”. This mistake is so common now that I’m not sure it’s possible to correct course: … | Simon Willison’s Weblog