Gabriel Bernadett-Shapiro

Demystifying LLMs: Power Plays in Security Automation

As the popularity of Large Language Models (LLMs) continues to grow, there is a clear divide in perception: some believe LLMs are the solution to everything, a ruthlessly efficient automaton that will take your job and steal your dance partner, while others remain deeply skeptical of their potential and have strictly forbidden their use in corporate environments. This presentation seeks to bridge that divide, offering a framework for understanding and incorporating LLMs into security work. We will examine the capabilities of LLMs most relevant to defensive use cases, highlighting their strengths (and weaknesses) in summarization, data labeling, and decision-task automation. We will also cover specific tactics with concrete examples, such as ‘direction following’ (guiding LLMs to adopt the desired perspective) and the ‘few-shot approach’ (seeding the prompt with labeled examples), emphasizing how precise prompting gets the most out of a model. Finally, the presentation will outline steps for automating tasks and improving analytical processes, and will provide attendees with basic scripts they can customize and test for their own requirements.
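
As a rough illustration of the two tactics named in the abstract, the sketch below assembles a chat-style prompt for a simple alert-labeling task: a system message supplies the ‘direction following’ (the analyst perspective the model should adopt) and a handful of labeled examples supply the ‘few-shot’ pattern. The alert texts, labels, and message format here are illustrative assumptions, not material from the talk; the resulting message list can be passed to whichever chat-completion API or local model you have available.

```python
import json

# Direction following: a system message that tells the model which
# perspective to adopt before it sees any data.
SYSTEM_PROMPT = (
    "You are a SOC analyst triaging alert descriptions. "
    "Label each alert as PHISHING, MALWARE, or BENIGN and answer "
    "with the label only."
)

# Few-shot approach: a few labeled examples (illustrative, made up here)
# that show the model exactly what a correct answer looks like.
FEW_SHOT_EXAMPLES = [
    ("User reported an email asking them to re-enter O365 credentials "
     "on an unfamiliar domain.", "PHISHING"),
    ("EDR flagged a scheduled task launching an unsigned binary from "
     "a temp directory.", "MALWARE"),
    ("Routine password reset completed by the service desk.", "BENIGN"),
]


def build_messages(alert_text: str) -> list[dict]:
    """Assemble a chat-completion style message list for one alert."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for example_text, label in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": example_text})
        messages.append({"role": "assistant", "content": label})
    # The new, unlabeled alert goes last so the model continues the pattern.
    messages.append({"role": "user", "content": alert_text})
    return messages


if __name__ == "__main__":
    new_alert = "Employee received an SMS with a link to a fake MFA portal."
    # Print the prompt; in practice you would send `messages` to a
    # chat-completion endpoint or a local model of your choosing.
    print(json.dumps(build_messages(new_alert), indent=2))
```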

Gabriel Bernadett-Shapiro is a cybersecurity leader with extensive experience in threat intelligence and incident response. Most recently he was at OpenAI, working at the forefront of the rapid growth of AI safety, where he transformed investigations of emerging threats into capability evaluations for the GPT-4 model card. He also fostered collaboration between AI safety researchers and the information security community through the Cyber Grant Program. Prior to OpenAI, Gabriel was a senior analyst on the Apple Information Security Threat Intelligence team.

Originally from the Bay Area, he holds a Master of Arts in Public Diplomacy from USC and a BA in International Relations from Occidental College.
