Roast topics
Find topics
Roast it!
Roast topics
Find topics
Find it!
Login
From:
The Stack
(Uncensored)
subscribe
LLMs can be trivially backdoored - minimal poison dose required
https://www.thestack.technology/llms-can-be-trivially-backdoored-minimal-poison-dose-required/
links
backlinks
Tagged with:
ai
llms
prompt injection
Large Language Models can be backdoored by introducing just a limited number of “poisoned” documents during their training, say researchers.