Thursday
Room 4
10:20 - 11:20
(UTC+01)
Talk (60 min)
Prompt Injection Attacks in LLM-Powered Applications
Large Language Models (LLMs) are rapidly being embedded into production applications as chatbots, copilots, and autonomous agents. While these systems feel fundamentally new, the security failures they introduce are strikingly familiar.
Prompt injection is best understood not as an exotic AI risk, but as the logical evolution of classic injection vulnerabilities, in which untrusted input can alter system behaviour in unintended ways.
We begin by breaking down LLM architecture and the anatomy of an in-application chat feature, highlighting where prompts, retrieved data (RAG), and tools are combined into a single execution context. From there, we explore practical prompt injection techniques observed in real-world testing, including direct elicitation, role-play and distraction attacks, formatting-based evasion, token smuggling, and indirect prompt injection via poisoned documents and external data sources. These techniques demonstrate how attackers can leak system prompts, bypass safeguards, exfiltrate sensitive data, and abuse excessive model agency to trigger high-impact actions.
The presentation emphasizes that the real risk is not what a model can say, but what it is allowed to access and do. We conclude with a defender-focused methodology rooted in familiar AppSec principles: attack surface enumeration, least privilege, explicit permissions, separation of duties, and defense in depth. By reframing prompt injection as an injection class vulnerability within a broader AI ecosystem, rather than a purely “AI problem”, security teams can apply proven strategies to secure LLM-enabled systems before attackers do
