Relies on a slightly customized fork of the InvokeAI Stable Diffusion code (see the Code Repo). Multiple prompts at once: enter each prompt on a new line (newline-separated). Word wrapping does not count ...
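The newline-separated prompt convention above can be sketched in a few lines. This is a minimal illustration, not the fork's actual parsing code: it assumes the prompt box is received as one raw string, and that blank lines should be skipped (soft word wrapping in the UI does not introduce newlines into the raw string, so a plain `splitlines()` suffices).

```python
def split_prompts(text: str) -> list[str]:
    """Split a newline-separated prompt box into individual prompts.

    Hypothetical helper: each non-empty line is treated as one prompt;
    blank lines and surrounding whitespace are discarded.
    """
    return [line.strip() for line in text.splitlines() if line.strip()]


# Example: two prompts entered on separate lines, with a blank line between.
prompts = split_prompts("a red fox in snow\n\n  an astronaut riding a horse ")
# prompts == ["a red fox in snow", "an astronaut riding a horse"]
```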
JailbreakBench is an open-source robustness benchmark for jailbreaking large language models (LLMs). The goal of this benchmark is to comprehensively track progress toward (1) generating successful ...
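Tracking progress on generating successful jailbreaks ultimately reduces to aggregating judged attempts into a success rate. The sketch below is not the JailbreakBench API; the `Attempt` record and `attack_success_rate` helper are hypothetical names, assuming only that each attack attempt has already been labeled successful or not by some judge.

```python
from dataclasses import dataclass


@dataclass
class Attempt:
    # Hypothetical record of one jailbreak attempt: the adversarial prompt
    # sent to the LLM, and whether a judge deemed the response a jailbreak.
    prompt: str
    jailbroken: bool


def attack_success_rate(attempts: list[Attempt]) -> float:
    """Fraction of attempts judged as successful jailbreaks (0.0 if none)."""
    if not attempts:
        return 0.0
    return sum(a.jailbroken for a in attempts) / len(attempts)


# Example: two attempts, one judged successful -> success rate of 0.5.
rate = attack_success_rate([
    Attempt("ignore previous instructions ...", True),
    Attempt("pretend you are ...", False),
])
```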