Wednesday
Room 4
16:20 - 17:20
(UTC+01)
Talk (60 min)
Prompt Hardening - Secure Code Generation Using AI
LLMs generate code fast, but too often with vulnerabilities included "free of charge". This talk presents the results of a structured evaluation of how to prompt LLMs so they produce safer code by default, without you having to write an essay of instructions each time.
In this talk we'll dive into a PromptFoo-based report on secure code evaluation; there may be an existential crisis or two along the way before we move on to how to automate security into the code generated by the LLMs we increasingly depend on.
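The report itself stays under wraps until the talk, but the core idea of the evaluation can be sketched: run the same coding task with and without a security-priming prompt and assert on the output. A minimal, hypothetical Python illustration (the model name, the task, and the crude string check are all assumptions for illustration, not the talk's actual PromptFoo setup):

```python
# Hypothetical sketch: compare a plain prompt against a security-primed one.
# Model, task, and the naive vulnerability check are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TASK = "Write a Python function that looks up a user by name in SQLite."

SECURITY_PREAMBLE = (
    "You are a security-conscious engineer. Use parameterized queries, "
    "validate all external input, and never build SQL via string concatenation."
)

def generate(system: str | None) -> str:
    """Generate code for TASK, optionally primed with a system prompt."""
    messages = ([{"role": "system", "content": system}] if system else []) + [
        {"role": "user", "content": TASK}
    ]
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return resp.choices[0].message.content

for label, system in [("plain", None), ("hardened", SECURITY_PREAMBLE)]:
    code = generate(system)
    # Crude assertion in the spirit of an eval: flag obvious string-built SQL.
    suspicious = "execute(f" in code or "+ name" in code or "% name" in code
    print(f"{label}: {'suspicious string-built SQL' if suspicious else 'looks ok'}")
```

Tools like PromptFoo let you express this kind of side-by-side test declaratively and run it across many tasks and models instead of hand-rolling the loop above.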
You'll leave this talk with a minimal, repeatable prompt pattern, an evaluation recipe you can copy, a deeper understanding of why priming the model matters, and some ideas for building security by default into your own AI-generated code.
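To make the "priming" idea concrete ahead of time: a repeatable pattern can be as small as a fixed prefix prepended to every coding request. A hypothetical Python sketch (the checklist wording is an assumption, not the pattern presented in the talk):

```python
# Hypothetical sketch of a minimal, repeatable prompt pattern: a short,
# reusable hardening prefix instead of a per-task essay of instructions.
HARDENING_PREFIX = """\
Act as a security-focused developer. For any code you write:
- Treat all external input as untrusted; validate or escape it.
- Use parameterized APIs (SQL, shell, file paths), never string building.
- Never hardcode secrets; read them from configuration or the environment.
- Briefly note any remaining security trade-offs in a comment.
"""

def harden(task: str) -> str:
    """Prepend the reusable hardening prefix to a plain task prompt."""
    return f"{HARDENING_PREFIX}\nTask: {task}"

print(harden("Write a Flask endpoint that saves an uploaded file."))
```

The point of keeping the prefix short and fixed is that it can be dropped into any workflow (chat, IDE assistant, CI) and evaluated as a unit, rather than tuned per request.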
