We study how persuasion techniques can be used to jailbreak LLMs, and we advocate for more fundamental mitigations for highly interactive LLMs. Project page: chats-lab.github.io