Alex Delamotte
LLM Malware In the Wild
Large language models (LLMs) are now part of mainstream software‑development workflows, but they have also become a powerful new tool for adversaries. Over the past year we hunted files uploaded to VirusTotal with a multi‑provider YARA rule that detects hard‑coded OpenAI and Anthropic API credentials. The rule triggered on fully‑weaponised binaries and scripts that outsource key stages of the attack chain to commercial AI services.
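To make the approach concrete, the sketch below shows a minimal rule in the spirit of that hunt. It is an illustration, not the rule we will release; the key-prefix patterns are assumptions based on publicly documented OpenAI ("sk-", project-scoped "sk-proj-") and Anthropic ("sk-ant-") key formats:

    rule Suspicious_Embedded_LLM_API_Key
    {
        meta:
            description = "Illustrative sketch: file embeds an OpenAI- or Anthropic-style API key prefix"
        strings:
            // OpenAI secret keys start with "sk-"; project-scoped keys use "sk-proj-"
            $openai = /sk-(proj-)?[A-Za-z0-9_-]{20,}/
            // Anthropic keys use the "sk-ant-" prefix (e.g. sk-ant-api03-...)
            $anthropic = /sk-ant-[A-Za-z0-9_-]{20,}/
        condition:
            any of them
    }

In practice a rule like this needs allow-listing before it becomes a usable hunting signature: SDKs, documentation, and test fixtures routinely embed placeholder keys that match the same prefixes.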
In this talk we unpack what we found. We will walk through multiple malware families that embed real API keys and offload tasks such as phishing‑email generation, victim triage, code‑signing bypasses, and on‑device payload generation to commercial LLMs. Attendees will learn how LLM‑powered malware changes the defender’s problem space: static signatures fail because the malicious logic is produced only at run‑time; network inspection is harder because calls look identical to legitimate use; and prompt engineering itself becomes an adversarial discipline. We will share statistics on prevalence and reveal common evasion tricks used to skirt provider policy.
The session assumes no prior machine‑learning background; we focus on concrete reversing and detection workflows that analysts can reproduce with open tools. We will release our YARA rules and VirusTotal query templates. Finally, we discuss where the ecosystem is heading and recommend policy changes providers could enforce today to make malicious LLM usage dramatically more expensive.
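As an example of the query-template side, a VirusTotal Intelligence search for recently submitted, detected files embedding an Anthropic-style key prefix could look like the line below; the prefix, detection threshold, and date are illustrative assumptions, not the exact templates we will publish:

    content:"sk-ant-api03-" positives:1+ fs:2024-01-01+

Here content: performs a raw byte search across submissions, positives:1+ limits results to files flagged by at least one engine, and fs: bounds the first-submission date.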
Alex’s passion for cybersecurity is humbly rooted in the early aughts, when she declared a vendetta against a computer worm. Over the past decade, Alex has worked with blue, purple, and red teams, though her passion is threat research. Alex enjoys researching the intersection of cybercrime and state-sponsored activity, with a recent focus on LLM-enabled and cloud threats. She has presented at DEF CON’s Cloud Village, Hushcon, and SLEUTHCON. In her spare time, she can be found DJing or learning more languages.
