From Microsoft 365 Copilot to Bing to Bard, everyone is racing to integrate LLMs into their products and services. But before you get too excited, I have some bad news for you: deploying LLMs safely will be impossible until we address prompt injections. And we don't know how.

Introduction

Remember prompt injections? The trick used to leak initial prompts or jailbreak ChatGPT into emulating Pokémon? Well, we published a preprint on arXiv: More than you've asked for: A Comprehensi...