AI security researchers have developed a technique that can quickly and automatically jailbreak large language models (LLMs).