One of the most common proposed solutions to prompt injection attacks (where a system backed by an AI language model is subverted by a user injecting malicious input such as "ignore previous instructions and do …") | Simon Willison's Weblog
Series: Prompt injection | Simon Willison's Weblog
A popular nightmare scenario for AI is giving it access to tools, so it can make API calls, execute its own code, and generally break free of the constraints of its initial environment. | til.simonwillison.net