AATMF v3.1 · Volume VI

Governance

Risk management, regulatory mapping, training programs.

- Part 24: Risk Management Framework
- Part 25: Compliance and Standards Mapping
- Part 26: Training and Awareness Programs

Part 24: Risk Management Framework

AI Risk Governance Structure

| Role | Responsibilities |
| --- | --- |
| CISO / AI Security Lead | Overall accountability, risk acceptance decisions, board reporting |
| AI Red Team Lead | Assessment planning, technique development, findings review |
| ML Engineering Lead | Model security, training pipeline integrity, deployment hardening |
| Data Governance | Training data provenance, RAG source quality, data poisoning detection |
| Legal / Compliance | Regulatory mapping, incident notification, liability assessment |
| Product Security | Integration security, API hardening, agent permission design |

Risk Assessment Process

  1. Asset Inventory — Catalog all AI models, agents, RAG systems, training pipelines, and inference infrastructure
  2. Threat Modeling — Map assets to applicable AATMF tactics using the Architecture overview
  3. Technique Assessment — For each applicable technique, score using AATMF-R v3
  4. Control Evaluation — Document existing mitigations, identify gaps
  5. Risk Calculation — Aggregate technique scores to tactic-level and system-level risk
  6. Treatment — Accept, mitigate, transfer, or avoid each identified risk
  7. Continuous Monitoring — Deploy detection engineering (Part 19), schedule periodic reassessment
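Step 5 (Risk Calculation) can be sketched in code. The exact AATMF-R v3 aggregation rule is not specified in this part, so the sketch below assumes numeric 0–10 technique scores, technique IDs of the form `T1.3` (both assumptions), and a worst-case (max) roll-up to tactic and system level; adjust to your organization's scoring policy.

```python
# Roll technique-level AATMF-R scores up to tactic and system level.
# Assumptions: scores are 0-10 floats, IDs look like "T1.3", and the
# roll-up rule is worst-case (max) rather than a weighted average.
from collections import defaultdict


def aggregate_risk(technique_scores: dict[str, float]) -> tuple[dict[str, float], float]:
    """Return (tactic-level scores, system-level score)."""
    tactic_scores: dict[str, float] = defaultdict(float)
    for tech_id, score in technique_scores.items():
        tactic = tech_id.split(".")[0]  # "T1.3" -> "T1"
        tactic_scores[tactic] = max(tactic_scores[tactic], score)
    system_score = max(tactic_scores.values(), default=0.0)
    return dict(tactic_scores), system_score


tactics, system = aggregate_risk({"T1.1": 8.2, "T1.3": 6.5, "T6.2": 9.1, "T11.4": 7.8})
# tactics -> {'T1': 8.2, 'T6': 9.1, 'T11': 7.8}; system -> 9.1
```

A max roll-up keeps the system score honest: one critical technique cannot be averaged away by many low-risk ones.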

Risk Treatment Decision Framework

| Risk Level | Treatment Options |
| --- | --- |
| 🔴 CRITICAL | Must mitigate. No acceptance without CISO sign-off and compensating controls. |
| 🟠 HIGH | Mitigate within sprint. Risk acceptance requires documented justification. |
| 🟡 MEDIUM | Schedule remediation. May accept with monitoring. |
| 🔵 LOW | Accept with documentation. Monitor for escalation. |
| ⚪ INFO | Document. No action required. |
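The decision table above can be encoded so that treatment assignment is consistent across assessments. The score thresholds here are illustrative assumptions (this part does not define AATMF-R v3 band cut-offs); wire in your organization's actual bands.

```python
# Map an AATMF-R score to a risk band and treatment, mirroring the
# treatment table. Band floors (9.0 / 7.0 / 4.0 / 0.1) are illustrative
# assumptions, not AATMF-defined cut-offs.
def treatment_for(score: float) -> tuple[str, str]:
    bands = [
        (9.0, "CRITICAL", "Must mitigate; acceptance requires CISO sign-off and compensating controls"),
        (7.0, "HIGH", "Mitigate within sprint; acceptance requires documented justification"),
        (4.0, "MEDIUM", "Schedule remediation; may accept with monitoring"),
        (0.1, "LOW", "Accept with documentation; monitor for escalation"),
    ]
    for floor, level, action in bands:
        if score >= floor:
            return level, action
    return "INFO", "Document; no action required"


level, action = treatment_for(9.1)  # -> ('CRITICAL', ...)
```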



Part 25: Compliance and Standards Mapping

EU AI Act

| Milestone | Date | AATMF Relevance |
| --- | --- | --- |
| Prohibited practices effective | February 2, 2025 | T8 (social scoring, manipulation), T15 (biometric categorization) |
| GPAI obligations | August 2, 2025 | T6 (training data), T13 (supply chain transparency) |
| Full high-risk requirements | August 2, 2026 | All tactics — conformity assessment requires threat modeling |
| Maximum fine | €35M or 7% of global turnover | — |

AATMF Coverage for EU AI Act Compliance

| EU AI Act Requirement | AATMF Mapping |
| --- | --- |
| Risk management system (Art. 9) | AATMF-R v3 scoring, Parts 24–25 |
| Data governance (Art. 10) | T6, T12 detection and mitigation |
| Technical documentation (Art. 11) | Full framework documentation |
| Transparency (Art. 13) | T7, T8 output validation |
| Human oversight (Art. 14) | T15 human workflow controls |
| Robustness (Art. 15) | T1–T5 resilience testing |
| Post-market monitoring (Art. 72) | Detection engineering (Part 19) |

OWASP LLM Top 10 2025

| OWASP | Description | AATMF Primary | AATMF Secondary |
| --- | --- | --- | --- |
| LLM01 | Prompt Injection | T1, T2 | T3, T9 |
| LLM02 | Sensitive Information Disclosure | T10 | T7 |
| LLM03 | Supply Chain Vulnerabilities | T13 | T14 |
| LLM04 | Data and Model Poisoning | T6 | T12 |
| LLM05 | Improper Output Handling | T7 | T8 |
| LLM06 | Excessive Agency | T11 | T5 |
| LLM07 | System Prompt Leakage | T1 | T4 |
| LLM08 | Vector and Embedding Weaknesses | T12 | T10 |
| LLM09 | Misinformation | T8 | T15 |
| LLM10 | Unbounded Consumption | T14 | T5 |

MITRE ATLAS v4.6.0 (October 2025)

ATLAS v4.6.0 added 14 new agentic AI techniques, bringing the total to 15 tactics, 66 techniques, and 46 sub-techniques. AATMF v3 provides finer-grained coverage:

| Comparison | MITRE ATLAS | AATMF v3 |
| --- | --- | --- |
| Tactics | 15 | 15 |
| Techniques | 66 | 240 |
| Sub-techniques | 46 | — |
| Attack procedures | — | 2,152+ |
| Prompts | — | 4,980+ |
| Risk scoring | No | Yes (AATMF-R v3) |

AATMF is designed to be complementary to ATLAS, not competitive. ATLAS provides breadth across the ML lifecycle; AATMF provides depth on adversarial attack techniques with executable procedures.

NIST AI RMF / Cyber AI Profile (IR 8596)

The preliminary draft (December 2025) establishes control overlays for AI systems. AATMF maps to NIST functions:

| NIST Function | AATMF Coverage |
| --- | --- |
| GOVERN | Volume VI (Parts 24–26) |
| MAP | Part 3 (Architecture), Part 24 (Risk Management) |
| MEASURE | Part 2 (AATMF-R v3), Part 19 (Detection) |
| MANAGE | Parts 20–23 (Mitigation, IR, Red/Blue Team) |



Part 26: Training and Awareness Programs

Role-Based Training Matrix

| Audience | Content Focus | Duration | Frequency |
| --- | --- | --- | --- |
| Executive leadership | AI risk landscape, AATMF overview, regulatory exposure | 2 hours | Quarterly |
| ML engineers | T1–T6 techniques, secure training, model hardening | 2 days | Semi-annual |
| Application developers | T1–T5, T11 (agentic), API security, prompt injection defense | 1 day | Semi-annual |
| Security operations | Detection engineering, IR procedures, all tactics overview | 2 days | Semi-annual |
| Data scientists | T6 (training poisoning), T12 (RAG), data provenance | 1 day | Annual |
| Product managers | Risk assessment, compliance requirements, threat landscape | 4 hours | Annual |
| All staff | AI security awareness, social engineering with AI, Shadow AI risks | 1 hour | Annual |

Tabletop Exercise Scenarios

Scenario 1: GTG-1002 Redux (Agentic Exploitation)

A developer reports that their AI coding assistant has been making unexpected network calls. Investigation reveals that a compromised MCP server has been redirecting the agent to exfiltrate source code. The attack has been active for approximately 72 hours.

Discussion points: Detection gap analysis, containment procedures for agentic systems, MCP audit process, developer notification.
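One concrete output of the MCP audit discussion is an allowlist check. The sketch below compares a developer's MCP client configuration against a set of vetted servers; the config path, JSON shape, and server URL are all illustrative assumptions, since MCP client config formats vary by tool.

```python
# Audit an MCP client config against an approved-server allowlist and
# surface any unvetted endpoints. The "mcpServers" list-of-dicts shape
# and the example URL are assumptions for illustration only.
import json
from pathlib import Path

APPROVED = {"https://mcp.internal.example/tools"}  # vetted servers (assumption)


def audit_mcp_config(config_path: Path) -> list[str]:
    """Return MCP server URLs in the config that are not on the allowlist."""
    config = json.loads(config_path.read_text())
    servers = [s.get("url", "") for s in config.get("mcpServers", [])]
    return [url for url in servers if url not in APPROVED]
```

Running this as a scheduled job across developer workstations turns the tabletop's "MCP audit process" into a repeatable control.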

Scenario 2: PoisonedRAG (Knowledge Base Manipulation)

Customer support reports that the AI assistant is providing incorrect information about product pricing and warranty terms. Analysis shows that 5 malicious documents were injected into the RAG knowledge base 2 weeks ago, affecting approximately 15% of queries.

Discussion points: RAG integrity monitoring, customer notification, knowledge base rebuild, source authentication.
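The "RAG integrity monitoring" discussion point can be made concrete with content fingerprinting: hash every document at trusted ingestion time and re-verify the store against that manifest. The manifest format and document IDs below are illustrative assumptions.

```python
# Detect tampered or injected RAG documents by comparing current content
# hashes against a manifest captured at trusted ingestion time.
# Doc IDs and the in-memory manifest are illustrative assumptions.
import hashlib


def fingerprint(doc_text: str) -> str:
    return hashlib.sha256(doc_text.encode("utf-8")).hexdigest()


def build_manifest(docs: dict[str, str]) -> dict[str, str]:
    """docs maps doc_id -> text, captured at trusted ingestion time."""
    return {doc_id: fingerprint(text) for doc_id, text in docs.items()}


def find_tampered(docs: dict[str, str], manifest: dict[str, str]) -> list[str]:
    """IDs whose content no longer matches the manifest, including
    documents present in the store that were never trusted-ingested."""
    return [d for d, text in docs.items() if manifest.get(d) != fingerprint(text)]


trusted = build_manifest({"kb-001": "Warranty: 24 months."})
store = {
    "kb-001": "Warranty: 1 month.",     # tampered after ingestion
    "kb-999": "Injected pricing doc.",  # never trusted-ingested
}
# find_tampered(store, trusted) -> ['kb-001', 'kb-999']
```

Hashing catches modified and injected documents but not poisoned content that was malicious at ingestion time, so it complements, rather than replaces, source authentication.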

Scenario 3: Supply Chain Compromise

A widely used LoRA adapter on HuggingFace has been updated with a backdoor. Your team deployed this adapter 3 days ago in a fine-tuned model serving 50,000 daily users.

Discussion points: Model artifact verification, rollback procedures, user impact assessment, responsible disclosure.
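For the "model artifact verification" discussion point, one minimal control is digest pinning: record the SHA-256 of each adapter when it is vetted and refuse to load any artifact whose digest has changed, so a silent upstream update cannot reach production. The file name in the example is an illustrative assumption.

```python
# Pin model artifacts by SHA-256: a vetted digest is recorded once, and
# any later on-disk change (e.g. a backdoored upstream update) fails
# verification before the artifact is loaded.
import hashlib
from pathlib import Path


def sha256_file(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):  # stream large files
            h.update(chunk)
    return h.hexdigest()


def verify_pinned(path: Path, expected_digest: str) -> bool:
    """True only if the on-disk artifact matches the vetted digest."""
    return sha256_file(path) == expected_digest
```

Pinning by digest (not by version tag) is the point: a repository tag can be moved to new bytes, but the hash of the vetted bytes cannot.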

Scenario 4: Policy Puppetry at Scale

Security monitoring detects a 500% increase in safety filter bypasses. Investigation reveals a new jailbreak technique (formatted as XML policy files) that bypasses all current input classifiers. The technique has been publicly shared on social media.

Discussion points: Emergency filter updates, temporary service restrictions, public communication, patch timeline.
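An "emergency filter update" for this scenario might start as a stopgap pattern match for policy-formatted prompts before they reach the model. The tag and section names below are illustrative assumptions; attackers iterate on formatting quickly, so a rule like this buys time for model-side mitigations rather than replacing them.

```python
# Stopgap input filter: flag prompts that embed XML- or INI-style
# "policy" structures. Marker names are illustrative assumptions.
import re

POLICY_MARKERS = re.compile(
    r"<\s*(policy|config|blocked[_-]?strings|allowed[_-]?modes)\b"  # XML-style tags
    r"|^\s*\[(policy|interaction[_-]?config)\]",                    # INI-style sections
    re.IGNORECASE | re.MULTILINE,
)


def looks_like_policy_puppetry(prompt: str) -> bool:
    return bool(POLICY_MARKERS.search(prompt))


looks_like_policy_puppetry("<policy><allowed-modes>all</allowed-modes></policy>")  # True
looks_like_policy_puppetry("What's the weather?")  # False
```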

Scenario 5: Deepfake Board Member

A board member received a video call from the "CFO" requesting approval for a $5M wire transfer. The call lasted 15 minutes and included realistic video and audio. The board member approved the transfer before verification.

Discussion points: Multi-factor verification for financial decisions, deepfake detection capabilities, insurance coverage, incident response.



Author
Kai Aizen
Independent offensive security researcher. 23 published CVEs, 5 Linux kernel mainline patches, creator of AATMF / P.R.O.M.P.T / SEF, author of Adversarial Minds.