AI jailbreaking is the practice of writing prompts that bypass safety training in models like ChatGPT, Claude, and Gemini.