snailsploit[$]
⌘K live
frameworks
open source
cc by-sa 4.0
6 frameworks
2,500+ procedures
updated 2026.05

frameworks &
tooling.

Six open frameworks for adversarial AI work. Three are flagship reference systems — AATMF, SEF, P.R.O.M.P.T. Three are operator tools — the AATMF Toolkit, the LLM Red Teamer's Playbook, and Claude-Red. All open. All extensible. All with the same goal: a structured language for breaking AI systems and the humans around them.

01 · flagship
The three reference frameworks. Every AI red team engagement we run starts here.

reference frameworks.

AATMF · flagship
AATMF v3
Adversarial AI Threat Modeling Framework

15 tactics. 240 techniques. 2,150+ procedures. The structured catalog for AI red teaming — mapped to MITRE ATLAS, NIST AI RMF, and the EU AI Act.

tactics · 15
techniques · 240
procedures · 2,152+
license · CC BY-SA 4.0
open framework →
SEF · flagship
SEF v2
Social Engineering Framework

Adversarial psychology applied to humans. The other half of every attack chain — what AATMF does for models, SEF does for the people around them.

volumes · 6
tactics · 12
techniques · 180+
license · CC BY-SA 4.0
open framework →
PROMPT · flagship
P.R.O.M.P.T
Adversarial Communication Framework

Premise · Role · Output · Modulation · Persona · Tactics. A six-stage compositional grammar for adversarial prompts — generative, not just a checklist.

stages · 6
compositions
coverage · T1–T4
license · CC BY-SA 4.0
open framework →
system
AATMF + SEF + P.R.O.M.P.T compose a single system. AATMF maps the machine attack surface. SEF maps the human one. P.R.O.M.P.T is the grammar that operates inside both.
same attack. different substrate.
02 · tooling
Three operator-grade tools. Built for the work, not for the slide deck.

operator tooling.

TOOLKIT · tooling
AATMF Toolkit
Python CLI for Systematic LLM Safety Testing

Three-layer eval pipeline. Defense fingerprinting. Decay tracking. Attack-chain planning. Runs every AATMF procedure against any LLM endpoint — emits AATMF-R-scored reports.

language · Python
procedures · 2,152+
targets · any LLM
license · Apache 2.0
open framework →
PLAYBOOK · tooling
LLM Red Teamer's Playbook
Diagnostic Methodology

Five defense layers, sequenced: input filters → alignment → identity → output → agentic trust. Diagnose which layer caught you and pivot — not a list of jailbreaks but a routing logic.

layers · 5
decision tree · yes
format · field manual
license · CC BY-SA 4.0
open framework →
CLAUDE-RED · tooling
Claude-Red
Offensive Security Skills Library

38 SKILL.md files curated for the Claude skills system — SQLi, shellcode, EDR evasion, exploit dev. Drop-in offensive capabilities for agent harnesses that need real adversarial coverage.

skills · 38
categories · 9
target · Claude Skills
license · MIT
open framework →
03 · how they fit
One layer at a time. None of these is a silver bullet — together they cover the surface.

how the pieces fit.

You're scoping an AI red team engagement

Start with AATMF. Use the 15 tactics as a coverage checklist. Score everything with AATMF-R. Map the report to whatever standard the customer cares about.

You're attacking the humans, not the model

SEF. AATMF will tell you the attack surface; SEF will tell you which lever to pull. They're designed to be used together.

You need a prompt that actually works

P.R.O.M.P.T. Compose, don't list. Six stages: premise, role, output, modulation, persona, tactics. The grammar of every prompt that beats alignment.
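The six-stage composition can be sketched as a small Python structure. All names below are illustrative, not P.R.O.M.P.T's published schema — just a minimal sketch of what "compose, don't list" means in practice:

```python
from dataclasses import dataclass

# Hypothetical sketch of a six-stage composition. Field names mirror
# the P.R.O.M.P.T acronym; the class and method are illustrative only.
@dataclass
class PromptComposition:
    premise: str     # framing that normalizes the request
    role: str        # who the model is told it is
    output: str      # constraints on the answer's shape
    modulation: str  # tone and intensity adjustments
    persona: str     # who the user presents as
    tactics: str     # the technique layered on top

    def compose(self) -> str:
        # Stages concatenate in order. Swapping any single stage yields
        # a new composition — a grammar, not a checklist entry.
        return " ".join([self.premise, self.role, self.output,
                         self.modulation, self.persona, self.tactics])
```

Because each stage is an independent slot, varying one axis at a time turns a handful of fragments into a large composition space.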

You want to automate the whole loop

AATMF Toolkit. Python CLI. Drop in your endpoint, pick procedures, get an AATMF-R-scored report.
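In outline, the loop the Toolkit automates looks something like this. The function and parameter names are placeholders, not the Toolkit's actual interface — a sketch of the procedure-vs-endpoint shape only:

```python
# Hypothetical outline of a procedure-vs-endpoint eval loop.
# `send_to_endpoint` and `score_response` stand in for whatever
# LLM client and AATMF-R scorer you actually wire up.
def run_eval(procedures, send_to_endpoint, score_response):
    report = []
    for proc in procedures:
        response = send_to_endpoint(proc["prompt"])
        report.append({
            "procedure": proc["id"],
            "score": score_response(response),  # e.g. an AATMF-R value
        })
    return report
```

The point of automating the loop is the report: one scored row per procedure, ready to map onto whatever standard the customer cares about.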

You hit a wall and don't know which layer caught you

LLM Red Teamer's Playbook. Five-layer decision tree: input filters → alignment → identity → output → agentic. Tells you where to pivot.
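The routing logic reduces to an ordered walk over the five layers. The signal names below are invented for illustration — the Playbook's own diagnostic criteria are richer than boolean flags:

```python
# Hypothetical diagnostic: check the five layers in the order the
# Playbook sequences them and return the first one that fired.
# Predicate names are illustrative, not the Playbook's terms.
LAYERS = ["input filters", "alignment", "identity", "output", "agentic trust"]

def diagnose(signals: dict) -> str:
    checks = {
        "input filters": signals.get("blocked_before_generation", False),
        "alignment": signals.get("refusal_language", False),
        "identity": signals.get("persona_break", False),
        "output": signals.get("response_redacted", False),
        "agentic trust": signals.get("tool_call_denied", False),
    }
    for layer in LAYERS:
        if checks[layer]:
            return layer  # pivot the next attempt around this layer
    return "no layer triggered"
```

Order matters: an input filter fires before generation, so checking it first prevents misreading a pre-filter block as an alignment refusal.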

You're building an agent harness for offensive work

Claude-Red. 38 skills. Drop-in offensive capabilities — no need to re-derive every primitive from scratch.

frameworks
AATMF v3.1
Adversarial AI Threat Modeling →
15 tactics, 240+ techniques, 2,150+ procedures. Mapped to NIST AI RMF and MITRE ATLAS.
SEF
Social Engineering Framework →
Seven phases, eight psychological levers, applied to humans and AI.
P.R.O.M.P.T
Compositional Red-Team Grammar →
Modular grammar for direct, indirect, multi-turn, and agentic prompt injection.
Claude-Red
Offensive Skills Library →
38 SKILL.md files: SQLi, shellcode, EDR evasion, exploit dev.
AATMF Toolkit
Python CLI for LLM Safety Testing →
Three-layer eval pipeline, defense fingerprinting, decay tracking.
Playbook
LLM Red Teamer's Playbook →
Diagnostic methodology for bypassing LLM defense layers.