Indirect prompt injection can poison long-term AI agent memory, allowing injected instructions to persist and potentially exfiltrate conversation history. The post When AI Remembers Too Much – Persistent Behaviors in Agents’ Memory appeared first on Unit 42.| Unit 42
We examine security weaknesses in LLM code assistants. Issues like indirect prompt injection and model misuse are prevalent across platforms. The post The Risks of Code Assistant LLMs: Harmful Content, Misuse and Deception appeared first on Unit 42.| Unit 42