Prompt injection remains one of the biggest open security challenges for AI and LLM-powered systems in the enterprise. If you’ve been following my writing, you know I’ve explored how indirect injections, AI agents, and MCP servers multiply the surface area for these attacks. Each new agent or server is another potential entry point for malicious instructions to sneak past guardrails.| ceposta Technology Blog
We know building MCP servers are where everyone’s mind is when it comes to AI agents. That is, if you’re going to build useful AI agents, they will need access to enterprise data, tools, and context. Enterprise companies are scrambling to figure out what this means. Does this mean they build MCP servers instead of APIs? Which vendors’ MCP servers do they use? How do they secure these flows? How do they govern any of this?| ceposta Technology Blog
As organizations start to deploy AI agents in earnest, we are discovering just how easy it is to attack these kind of systems. I went into quite some detail about how “natural language” introduces new attack vectors in one of my recent blogs. These vulnerabilities aren’t merely theoretical. We’ve seen how a malicious Model Context Protocol (MCP) server could trick AI agents into leaking sensitive data like WhatsApp chat histories and SSH keys without user awareness. An Agent Mesh lays...| ceposta Technology Blog