whoami
KAI AIZEN
THE JAILBREAK CHEF
Security Researcher & 5x CVE Holder. Creator of the AATMF and P.R.O.M.P.T frameworks. Specializing in LLM jailbreaking, prompt injection, and adversarial AI.
Discovering how jailbroken states persist across GPT sessions through context inheritance.
Uncovering emergent prompt injection risks through ChatGPT custom instructions.
Comprehensive security analysis of vulnerabilities in the Model Context Protocol.
Adversarial AI Threat Modeling Framework. 14 tactics, 40+ techniques, quantitative risk scoring.
A systematic prompt engineering methodology: Purpose, Results, Obstacles, Mindset, Preferences, Technical.
Social Engineering Framework. Structured methodology for SE assessments. Coming soon.