Login
From:
Include Security Research Blog
(Uncensored)
subscribe
Improving LLM Security Against Prompt Injection: AppSec Guidance For Pentesters and Developers - Include Security Research Blog
https://blog.includesecurity.com/2024/01/improving-llm-security-against-prompt-injection-appsec-guidance-for-pentesters-and-developers/
links
backlinks
Tagged with:
prompt injection
llm security
security consulting
ai injection
aisec
machine learning security
minimizing risk
mitigating
mlsec
Developers should be using OpenAI roles to mitigate LLM prompt injection, while pentesters are missing vulnerabilities in LLM design.
Roast topics
Find topics
Find it!