102 posts tagged ‘prompt-injection’. Prompt Injection is a security attack against applications built on top of Large Language Models, introduced here and further described in this series of posts.| Simon Willison’s Weblog
As more people start hacking around with implementations of MCP (the Model Context Protocol, a new standard for making tools available to LLM-powered systems) the security implications of tools built …| Simon Willison’s Weblog
We have discovered a critical vulnerability in the Model Context Protocol (MCP) that allows for| invariantlabs.ai
This blog post demonstrates how an untrusted MCP server can attack and exfiltrate data from an agentic system that is also connected to a trusted WhatsApp MCP instance, side-stepping WhatsApp's encryption and security measures.| invariantlabs.ai
Spoiler: it doesn’t. But it should.| Medium
I really want an AI assistant: a Large Language Model powered chatbot that can answer questions and perform actions for me based on access to my private data and tools. …| Simon Willison’s Weblog
Riley Goodside, yesterday: Exploiting GPT-3 prompts with malicious inputs that order the model to ignore its previous directions. pic.twitter.com/I0NVr9LOJq- Riley Goodside (@goodside) September 12, 2022 Riley provided several examples. Here’s …| Simon Willison’s Weblog