Jim the AI Whisperer
  • About

  • Services & Testimonials

  • Prompts

  • Research

  • Hacks

  • Hire Me/Contact

As a red-team prompt engineer and system prompt sleuth, Jim has spent five years poking holes in safety claims and fighting for greater transparency in AI.

All of Jim's investigations are grounded in a single question: if AI models are so safe, why do they need to hide so much? Here are some of Jim's top exposés, jailbreaks, and prompt injections that pop the hood.

Jim's Top Posts on Hacks and Jailbreaks

OpenAI's Router-Switching Cripples GPT-5 and Sabotages Prompt Testing

OpenAI's hidden model routing system secretly switches ChatGPT conversations to undisclosed safety model "gpt-5-chat-safety"

Google Calendar Hack Exposes Gemini AI's Secret System Instructions

Using Gemini’s API link to Calendar to leak system prompts, and uncover secret user profiling rules and hidden endorsements.

HELP TO SUPPORT MY ETHICAL HACKS

If you appreciate my work exposing AI safety gaps & system instructions and want to see more, please click the yellow button!