snailsploit[$]
about
kai aizen
independent
offensive security · ai
since 2018

kai aizen.
operator.

I break production systems — Linux kernel, Kubernetes, container runtimes, OSS libraries, and the LLMs increasingly woven through them — then publish the methodology. Frameworks for structured adversarial-AI red teaming. Tooling for systematic vulnerability discovery. Books and articles for the human layer.

01 · bio
The short version. The long version is in Adversarial Minds.

background.

I work at the intersection of offensive security and AI. The shorthand: same attack, different substrate. The techniques that compromise human reasoning compromise machine reasoning for related reasons, and the techniques that compromise machine systems often start with a human in the loop.

Career path is unusual: ten years on the systems side (Linux kernel, Kubernetes, container runtimes, OSS supply chain), then a hard pivot into adversarial AI when it became clear the same operating mode applied. The frameworks — AATMF, SEF, P.R.O.M.P.T — are the artifact of that pivot. They are how I operate; publishing them is how I keep them honest.

Day-to-day: original research, frameworks, tooling, and a small number of high-trust engagements with organizations that need adversarial coverage at the model layer. I don't do volume. I do the work that needs doing.

role: Independent offensive security researcher
scope: Linux kernel · Kubernetes · container runtimes · OSS · LLMs
frameworks: AATMF · SEF · P.R.O.M.P.T
author: Adversarial Minds
contributing: Hakin9 · MITRE/NVD · Linux kernel mainline
location: independent · remote
02 · receipts
The work, not the titles. Numbers refresh as patches ship and CVEs are published.

receipts.

03 · principles
The non-negotiable bits. Mostly about what I won't do.

how I work.

Coordinated disclosure, always

Every CVE, every advisory, every framework citation goes through the maintainer first. No public 0-days for clout. No vendor-side surprises that put users at risk in the disclosure window.

Open frameworks

AATMF, SEF, and P.R.O.M.P.T are CC BY-SA. The toolkit is Apache 2.0. If a method is worth using, it's worth being public.

No volume engagements

Customer engagements are small in number and high in trust. If you're shopping for the cheapest red team you can find, this isn't the right shop.

No security theater

Findings are operational or they don't ship. Reports describe attack mechanics that reproduce, scored on a published rubric (AATMF-R), mapped to standards your compliance team already uses.

Independent

No corporate parent. No platform incentives. Research direction is set by what's interesting and operationally useful, not by what generates leads for a sales team.

The work is the product

I'd rather publish one piece that lands than ten that don't. The blog is sparse on purpose.

work
Services →
AI red team · advanced PT · social engineering
Research →
43 pieces · adversarial AI
Frameworks →
AATMF · P.R.O.M.P.T · SEF
Adversarial Minds →
book on offensive psychology
profiles
in  linkedin.com/in/kaiaizen
gh  github.com/SnailSploit
RG  researchgate.net/Kai-Aizen-2
x   x.com/SnailSploit