Offensive psychology, social engineering, and adversarial reasoning — the human-layer companion to AATMF and SEF.
Why the same attack works against humans and language models. A field-tested methodology for treating cognition itself as an attack surface — and a defensive framework for the systems built on top of it.
Phishing, jailbreaking, prompt injection, and pretexting are surface variations on a single attack pattern: an adversary supplying inputs that subvert a reasoning system's intended behavior. Once you see the pattern, the techniques become families and the defenses become engineering — not folklore.
Adversarial Minds is the long-form treatment of that thesis. Three years of red-team engagements, jailbreak research, and social-engineering work distilled into a methodology you can apply against humans, language models, or the agentic systems where the two meet.
"Reads less like a textbook and more like a methodology — exactly what the field needs." — reader
"Treats persuasion as adversarial input. The bridge between the social engineer and the prompt engineer." — reader