102 posts tagged ‘prompt-injection’. Prompt Injection is a security attack against applications built on top of Large Language Models, introduced here and further described in this series of posts.| Simon Willison’s Weblog
In the two and a half years that we’ve been talking about prompt injection attacks I’ve seen alarmingly little progress towards a robust solution. The new paper Defeating Prompt Injections …| Simon Willison’s Weblog
I really want an AI assistant: a Large Language Model powered chatbot that can answer questions and perform actions for me based on access to my private data and tools. …| Simon Willison’s Weblog
Activity around building sophisticated applications on top of LLMs (Large Language Models) such as GPT-3/4/ChatGPT/etc is growing like wildfire right now. Many of these applications are potentially vulnerable to prompt …| simonwillison.net
Riley Goodside, yesterday: Exploiting GPT-3 prompts with malicious inputs that order the model to ignore its previous directions. pic.twitter.com/I0NVr9LOJq- Riley Goodside (@goodside) September 12, 2022 Riley provided several examples. Here’s …| Simon Willison’s Weblog