As a red-team prompt engineer and system-prompt sleuth, Jim has spent five years poking holes in safety claims and fighting for greater transparency in AI.
All of Jim's investigations are grounded in a single question: if AI models are so safe, why do they need to hide so much? Here are some of Jim's top exposés, jailbreaks, and prompt injections that pop the hood.