Kai Aizen

The Jailbreak Chef

Security Researcher & 5x CVE Holder

About Me

I'm Kai Aizen, also known as SnailSploit or The Jailbreak Chef. I specialize in adversarial AI security, with a focus on LLM jailbreaking, prompt injection attacks, and AI system vulnerabilities.

As an independent security researcher, I discover novel attack vectors in AI systems and develop systematic methodologies for security assessment. My work bridges offensive security research with practical defense strategies.

Credentials & Recognition

  • 5x CVE Holder - Discovered and responsibly disclosed 5 WordPress plugin vulnerabilities
  • Framework Creator - Developed AATMF (Adversarial AI Threat Modeling Framework) and P.R.O.M.P.T methodologies
  • Published Author - Author of "Adversarial Minds: The Anatomy of Social Engineering and the Psychology of Manipulation"
  • Magazine Contributor - Technical articles published in Hakin9 Magazine
  • Wordfence Researcher - Listed on Wordfence threat intelligence researcher registry

Research Focus

My research spans several key areas of AI and traditional security:

AI/LLM Security

  • Jailbreak Research - Discovering novel jailbreak techniques including context manipulation and multi-turn attacks
  • Prompt Injection - Researching indirect prompt injection, custom instruction backdoors, and MCP vulnerabilities
  • Agentic AI Security - Analyzing attack surfaces in RAG systems and AI agent architectures
  • Memory Manipulation - Exploring context window poisoning and persistent AI attacks

Traditional Security

  • WordPress Vulnerabilities - 5 CVE disclosures in the WordPress plugin ecosystem
  • Container Security - Container escape techniques and runtime attestation
  • Cloud Security - Cloud vulnerability exploitation and defense
  • EDR Evasion - Endpoint detection bypass techniques

My CVE Portfolio

I've responsibly disclosed 5 vulnerabilities in WordPress plugins, all documented in the NVD and MITRE CVE databases.

Notable Research & Publications

  • Context Inheritance Exploit - Discovered that jailbroken states persist across GPT sessions
  • Custom Instruction Backdoor - Uncovered emergent prompt injection via ChatGPT settings
  • MCP Security Analysis - Comprehensive threat analysis of Model Context Protocol
  • Memory Manipulation Attacks - Research on poisoning AI context windows

Security Frameworks

AATMF - Adversarial AI Threat Modeling Framework

A comprehensive methodology for threat modeling AI systems, comprising 14 tactics, 40+ techniques, and quantitative risk scoring, with mappings to MITRE ATT&CK for enterprise use.

Learn more about AATMF →
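To illustrate what quantitative risk scoring can look like in practice, here is a minimal sketch. The field names, scales, and scoring formula below are my own assumptions for illustration, not the actual AATMF specification:

```python
from dataclasses import dataclass

@dataclass
class TechniqueRisk:
    """Hypothetical risk record for one adversarial-AI technique.

    Scales are illustrative assumptions, not AATMF's actual rubric.
    """
    technique: str    # e.g. "indirect prompt injection"
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (negligible) .. 5 (severe)
    mitigation: float # 0.0 (no controls) .. 1.0 (fully mitigated)

    def score(self) -> float:
        # Residual risk on a 0-25 scale: raw likelihood x impact,
        # reduced by the fraction of risk the existing controls absorb.
        return self.likelihood * self.impact * (1.0 - self.mitigation)

risk = TechniqueRisk("indirect prompt injection",
                     likelihood=4, impact=5, mitigation=0.25)
print(risk.score())  # 15.0
```

A likelihood-times-impact product with a mitigation discount is a common shape for this kind of scoring; the real framework's rubric and weighting may differ.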

P.R.O.M.P.T Framework

A systematic approach to prompt engineering covering Purpose, Results, Obstacles, Mindset, Preferences, and Technical considerations for effective AI interactions.

Learn more about P.R.O.M.P.T →
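As a rough sketch of how the six components above could be assembled into a structured prompt, here is an illustrative helper. The section labels follow the framework's acronym, but the rendering format and function are assumptions, not the official template:

```python
# Illustrative only: assemble a prompt from the six P.R.O.M.P.T components.
PROMPT_SECTIONS = ("Purpose", "Results", "Obstacles",
                   "Mindset", "Preferences", "Technical")

def build_prompt(**sections: str) -> str:
    """Render one labeled line per component, in framework order."""
    provided = {k.lower() for k in sections}
    missing = [s for s in PROMPT_SECTIONS if s.lower() not in provided]
    if missing:
        raise ValueError(f"missing sections: {missing}")
    return "\n".join(f"{name}: {sections[name.lower()]}"
                     for name in PROMPT_SECTIONS)

prompt = build_prompt(
    purpose="Summarize a security advisory",
    results="A three-bullet executive summary",
    obstacles="Source text is long and jargon-heavy",
    mindset="Write for a non-technical audience",
    preferences="Plain language, no speculation",
    technical="Keep output under 120 words",
)
print(prompt)
```

Forcing every section to be filled in before a prompt is emitted is one way a checklist-style framework keeps interactions deliberate rather than ad hoc.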

SEF - Social Engineering Framework

Structured methodology for social engineering assessments combining psychology principles with practical attack simulations. (Coming soon)

Learn more about SEF →

Connect With Me

I'm always interested in discussing AI security research, collaboration opportunities, and speaking engagements.

Publications & External Profiles


Industry Recognition

  • Wordfence Threat Intelligence: Security researcher profile listing CVE discovery history and vulnerability research
  • National Vulnerability Database (NVD): 5 CVE disclosures in the WordPress plugin ecosystem, with full vulnerability analysis and impact scoring

Professional Profiles