In a new case that showcases how prompt injection can impact AI-assisted tools, researchers have found a way to trick the GitHub Copilot chatbot into leaking sensitive data, such as AWS keys, from private repositories. The vulnerability was exploitable through comments hidden in pull requests that GitHub’s AI assistant subsequently analyzed. “The attack combined a novel CSP [Content Security Policy] bypass using GitHub’s own infrastructure with remote prompt injection,” said Omer Mayr...