Tools#| llm.datasette.io
If you are a user of LLM systems that use tools (you can call them “AI agents” if you like) it is critically important that you understand the risk of …| Simon Willison’s Weblog
102 posts tagged ‘prompt-injection’. Prompt Injection is a security attack against applications built on top of Large Language Models, introduced here and further described in this series of posts.| Simon Willison’s Weblog
Plugin hooks#| llm.datasette.io
Executing a prompt#| llm.datasette.io