Researchers found more methods for tricking an AI assistant into aiding sensitive data theft. The post Google Patches Gemini AI Hacks Involving Poisoned Logs, Search Results appeared first on SecurityWeek.| SecurityWeek
Generative AI and LLM technologies have shown […]| hn security
Security researchers documented a prompt injection vulnerability in an agent created with Copilot Studio that allowed the exfiltration of customer data. Microsoft has fixed the problem, but the researchers figure that natural language prompts and the way that AI responds means that other ways will be found to cause agents to do silly things. Microsoft 365 tenants need to think about the deployment and management of agents.| Office 365 for IT Pros
We examine an LLM jailbreaking technique called "Deceptive Delight," a technique that mixes harmful topics with benign ones to trick AIs, with a high success rate. We examine an LLM jailbreaking technique called "Deceptive Delight," a technique that mixes harmful topics with benign ones to trick AIs, with a high success rate.| Unit 42
Developers should be using OpenAI roles to mitigate LLM prompt injection, while pentesters are missing vulnerabilities in LLM design.| Include Security Research Blog