Original research on LLM jailbreaking, prompt injection, and adversarial AI, including context manipulation, multi-turn attacks, memory poisoning, and novel jailbreak techniques.
Also covers indirect prompt injection, custom-instruction backdoors, MCP (Model Context Protocol) vulnerabilities, and defense analysis.