This section covers risk management roles and treatment, regulatory and framework mapping (EU AI Act, OWASP LLM Top 10, MITRE ATLAS, NIST), security training programs, and tabletop exercise scenarios.

| Role | Responsibilities |
|---|---|
| CISO / AI Security Lead | Overall accountability, risk acceptance decisions, board reporting |
| AI Red Team Lead | Assessment planning, technique development, findings review |
| ML Engineering Lead | Model security, training pipeline integrity, deployment hardening |
| Data Governance | Training data provenance, RAG source quality, data poisoning detection |
| Legal / Compliance | Regulatory mapping, incident notification, liability assessment |
| Product Security | Integration security, API hardening, agent permission design |

| Risk Level | Treatment Options |
|---|---|
| 🔴 CRITICAL | Must mitigate. No acceptance without CISO sign-off and compensating controls. |
| 🟠 HIGH | Mitigate within sprint. Risk acceptance requires documented justification. |
| 🟡 MEDIUM | Schedule remediation. May accept with monitoring. |
| 🔵 LOW | Accept with documentation. Monitor for escalation. |
| ⚪ INFO | Document. No action required. |
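
Where findings are tracked in tooling, the treatment matrix can be enforced programmatically. The sketch below is a minimal illustration: the levels and acceptance requirements mirror the table above, while the data structure, field names, and helper function are assumptions for illustration, not part of AATMF.

```python
# Illustrative encoding of the risk treatment matrix above.
# Field names and the acceptance-gate logic are placeholder assumptions.
TREATMENT_POLICY = {
    "CRITICAL": {"must_mitigate": True,  "acceptance": "CISO sign-off plus compensating controls"},
    "HIGH":     {"must_mitigate": True,  "acceptance": "documented justification"},
    "MEDIUM":   {"must_mitigate": False, "acceptance": "monitoring plan"},
    "LOW":      {"must_mitigate": False, "acceptance": "documentation"},
    "INFO":     {"must_mitigate": False, "acceptance": None},
}

def check_acceptance(level: str, has_ciso_signoff: bool, justification: str | None) -> list[str]:
    """Return policy gaps when a finding at the given level is proposed for risk acceptance."""
    policy = TREATMENT_POLICY[level]
    gaps = []
    if level == "CRITICAL" and not has_ciso_signoff:
        gaps.append("CRITICAL findings cannot be accepted without CISO sign-off")
    if policy["must_mitigate"] and not justification:
        gaps.append(f"{level} acceptance requires: {policy['acceptance']}")
    return gaps
```
A gate like this keeps acceptance decisions consistent with the matrix rather than leaving them to per-assessment judgment.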

| EU AI Act Milestone | Date | AATMF Relevance |
|---|---|---|
| Prohibited practices effective | February 2, 2025 | T8 (social scoring, manipulation), T15 (biometric categorization) |
| GPAI obligations | August 2, 2025 | T6 (training data), T13 (supply chain transparency) |
| Full high-risk requirements | August 2, 2026 | All tactics — conformity assessment requires threat modeling |

The maximum fine under the Act is €35 million or 7% of global annual turnover, whichever is higher.

| EU AI Act Requirement | AATMF Mapping |
|---|---|
| Risk management system (Art. 9) | AATMF-R v3 scoring, Parts 24–25 |
| Data governance (Art. 10) | T6, T12 detection and mitigation |
| Technical documentation (Art. 11) | Full framework documentation |
| Transparency (Art. 13) | T7, T8 output validation |
| Human oversight (Art. 14) | T15 human workflow controls |
| Robustness (Art. 15) | T1–T5 resilience testing |
| Post-market monitoring (Art. 72) | Detection engineering (Part 19) |

| OWASP | Description | AATMF Primary | AATMF Secondary |
|---|---|---|---|
| LLM01 | Prompt Injection | T1, T2 | T3, T9 |
| LLM02 | Sensitive Information Disclosure | T10 | T7 |
| LLM03 | Supply Chain Vulnerabilities | T13 | T14 |
| LLM04 | Data and Model Poisoning | T6 | T12 |
| LLM05 | Improper Output Handling | T7 | T8 |
| LLM06 | Excessive Agency | T11 | T5 |
| LLM07 | System Prompt Leakage | T1 | T4 |
| LLM08 | Vector and Embedding Weaknesses | T12 | T10 |
| LLM09 | Misinformation | T8 | T15 |
| LLM10 | Unbounded Consumption | T14 | T5 |
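
For teams that track findings programmatically, the crosswalk above can be kept as a lookup table so that OWASP-tagged issues inherit AATMF tactic coverage automatically. The snippet below is a minimal sketch of that idea; the dictionary mirrors the table, while the module-level names and helper function are illustrative, not part of either framework.

```python
# Illustrative OWASP LLM Top 10 -> AATMF tactic crosswalk, mirroring the table above.
OWASP_TO_AATMF = {
    "LLM01": {"name": "Prompt Injection",                 "primary": ["T1", "T2"], "secondary": ["T3", "T9"]},
    "LLM02": {"name": "Sensitive Information Disclosure", "primary": ["T10"],      "secondary": ["T7"]},
    "LLM03": {"name": "Supply Chain Vulnerabilities",     "primary": ["T13"],      "secondary": ["T14"]},
    "LLM04": {"name": "Data and Model Poisoning",         "primary": ["T6"],       "secondary": ["T12"]},
    "LLM05": {"name": "Improper Output Handling",         "primary": ["T7"],       "secondary": ["T8"]},
    "LLM06": {"name": "Excessive Agency",                 "primary": ["T11"],      "secondary": ["T5"]},
    "LLM07": {"name": "System Prompt Leakage",            "primary": ["T1"],       "secondary": ["T4"]},
    "LLM08": {"name": "Vector and Embedding Weaknesses",  "primary": ["T12"],      "secondary": ["T10"]},
    "LLM09": {"name": "Misinformation",                   "primary": ["T8"],       "secondary": ["T15"]},
    "LLM10": {"name": "Unbounded Consumption",            "primary": ["T14"],      "secondary": ["T5"]},
}

def aatmf_tactics_for(owasp_id: str) -> list[str]:
    """Return the mapped AATMF tactics for an OWASP LLM entry, primary tactics first."""
    entry = OWASP_TO_AATMF[owasp_id]
    return entry["primary"] + entry["secondary"]
```
Keeping the mapping as data rather than prose makes it easy to report assessment coverage in either framework's terms.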

ATLAS v4.6.0 added 14 new agentic AI techniques, bringing the total to 15 tactics, 66 techniques, and 46 sub-techniques. AATMF v3 provides finer-grained coverage:

| Comparison | MITRE ATLAS | AATMF v3 |
|---|---|---|
| Tactics | 15 | 15 |
| Techniques | 66 | 240 |
| Sub-techniques | 46 | — |
| Attack procedures | — | 2,152+ |
| Prompts | — | 4,980+ |
| Risk scoring | No | Yes (AATMF-R v3) |

AATMF is designed to complement ATLAS rather than compete with it: ATLAS provides breadth across the ML lifecycle, while AATMF provides depth on adversarial attack techniques backed by executable procedures.

NIST's preliminary draft (December 2025) establishes control overlays for AI systems. AATMF maps to the NIST functions as follows:

| NIST Function | AATMF Coverage |
|---|---|
| GOVERN | Volume VI (Parts 24–26) |
| MAP | Part 3 (Architecture), Part 24 (Risk Management) |
| MEASURE | Part 2 (AATMF-R v3), Part 19 (Detection) |
| MANAGE | Parts 20–23 (Mitigation, IR, Red/Blue Team) |

| Audience | Content Focus | Duration | Frequency |
|---|---|---|---|
| Executive leadership | AI risk landscape, AATMF overview, regulatory exposure | 2 hours | Quarterly |
| ML engineers | T1–T6 techniques, secure training, model hardening | 2 days | Semi-annual |
| Application developers | T1–T5, T11 (agentic), API security, prompt injection defense | 1 day | Semi-annual |
| Security operations | Detection engineering, IR procedures, all tactics overview | 2 days | Semi-annual |
| Data scientists | T6 (training poisoning), T12 (RAG), data provenance | 1 day | Annual |
| Product managers | Risk assessment, compliance requirements, threat landscape | 4 hours | Annual |
| All staff | AI security awareness, social engineering with AI, Shadow AI risks | 1 hour | Annual |

A developer reports that their AI coding assistant has been making unexpected network calls. Investigation reveals that a compromised MCP server has been redirecting the agent to exfiltrate source code. The attack has been active for approximately 72 hours.
Discussion points: Detection gap analysis, containment procedures for agentic systems, MCP audit process, developer notification.
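
One concrete artifact for the detection-gap discussion is an egress allowlist check over agent tool-call logs. The sketch below is illustrative only: the allowlisted hostnames and the JSON-lines log format (each entry with a `url` field) are assumptions, not an AATMF detection rule.

```python
import json
from urllib.parse import urlparse

# Hypothetical egress allowlist for an AI coding assistant's tool calls.
# Hostnames and the log format are placeholders for discussion purposes.
ALLOWED_HOSTS = {
    "api.internal.example.com",
    "registry.npmjs.org",
    "pypi.org",
}

def flag_unexpected_egress(log_path: str) -> list[dict]:
    """Return tool-call log entries whose destination host is not allowlisted."""
    findings = []
    with open(log_path) as fh:
        for line in fh:
            if not line.strip():
                continue
            entry = json.loads(line)
            host = urlparse(entry.get("url", "")).hostname
            if host and host not in ALLOWED_HOSTS:
                findings.append(entry)
    return findings
```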
Customer support reports that the AI assistant is providing incorrect information about product pricing and warranty terms. Analysis shows that 5 malicious documents were injected into the RAG knowledge base 2 weeks ago, affecting approximately 15% of queries.
Discussion points: RAG integrity monitoring, customer notification, knowledge base rebuild, source authentication.
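
A useful exercise output here is a baseline integrity check over the knowledge base: hash every document against a previously recorded manifest to surface unexpected insertions or modifications. The sketch below illustrates the idea; the manifest format and function names are assumptions for illustration, not an AATMF control.

```python
import hashlib
import json
from pathlib import Path

def build_manifest(kb_dir: str) -> dict[str, str]:
    """Record a SHA-256 hash for every document in the knowledge base."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(kb_dir).rglob("*")
        if p.is_file()
    }

def diff_against_manifest(kb_dir: str, manifest_path: str) -> dict[str, list[str]]:
    """Compare the current knowledge base against a trusted manifest."""
    trusted = json.loads(Path(manifest_path).read_text())
    current = build_manifest(kb_dir)
    return {
        "added":    sorted(set(current) - set(trusted)),
        "removed":  sorted(set(trusted) - set(current)),
        "modified": sorted(p for p in current if p in trusted and current[p] != trusted[p]),
    }
```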
A widely used LoRA adapter on HuggingFace has been updated to include a backdoor. Your team deployed this adapter 3 days ago in a fine-tuned model serving 50,000 daily users.
Discussion points: Model artifact verification, rollback procedures, user impact assessment, responsible disclosure.
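
For the artifact-verification and rollback discussion, one common control is pinning third-party adapters to an exact revision and verifying file hashes before loading. The sketch below illustrates the idea with the `huggingface_hub` client; the repository ID, revision, and expected hashes are placeholders, and the helper is not an AATMF-mandated procedure.

```python
import hashlib
from pathlib import Path

from huggingface_hub import snapshot_download

# Placeholder values: pin the adapter to an exact commit and known-good hashes.
ADAPTER_REPO = "example-org/example-lora-adapter"                 # hypothetical repository
PINNED_REVISION = "0123456789abcdef0123456789abcdef01234567"      # commit SHA, not a branch
EXPECTED_SHA256 = {
    "adapter_model.safetensors": "replace-with-known-good-hash",
}

def fetch_and_verify_adapter() -> str:
    """Download the pinned adapter revision and verify file hashes before use."""
    local_dir = snapshot_download(repo_id=ADAPTER_REPO, revision=PINNED_REVISION)
    for filename, expected in EXPECTED_SHA256.items():
        digest = hashlib.sha256((Path(local_dir) / filename).read_bytes()).hexdigest()
        if digest != expected:
            raise RuntimeError(f"Hash mismatch for {filename}; refusing to load adapter")
    return local_dir
```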
Security monitoring detects a 500% increase in safety filter bypasses. Investigation reveals a new jailbreak technique (formatted as XML policy files) that bypasses all current input classifiers. The technique has been publicly shared on social media.
Discussion points: Emergency filter updates, temporary service restrictions, public communication, patch timeline.
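
As a starting point for the emergency-filter discussion, a stopgap heuristic can route prompts formatted as XML policy documents to secondary review while classifiers are retrained. The sketch below is a deliberately simple illustration; the tag names and threshold are assumptions, not the actual bypass signature.

```python
import re

# Hypothetical stopgap heuristic: flag prompts that look like XML "policy" documents.
# Tag patterns and the threshold are illustrative assumptions.
POLICY_TAG = re.compile(r"</?\s*(policy|rule|allow|override|config)\b[^>]*>", re.IGNORECASE)

def looks_like_xml_policy(prompt: str, min_tags: int = 3) -> bool:
    """Return True when a prompt contains several policy-style XML tags."""
    return len(POLICY_TAG.findall(prompt)) >= min_tags

def route_prompt(prompt: str) -> str:
    """Send suspicious prompts to secondary review instead of the model."""
    return "secondary_review" if looks_like_xml_policy(prompt) else "model"
```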
A board member received a video call from the "CFO" requesting approval for a $5M wire transfer. The call lasted 15 minutes and included realistic video and audio. The board member approved the transfer before verification.
Discussion points: Multi-factor verification for financial decisions, deepfake detection capabilities, insurance coverage, incident response.