Thursday
Room 5
10:20 - 11:20
(UTC+01)
Workshop (60 min)
Part 1/2: Securing your AI code generation workflow
AI code generation promises dramatic productivity gains but also introduces new classes of risk that most teams are still unprepared for. Coding models are trained on public repositories that mix secure patterns with vulnerable or outdated ones, sometimes amplifying bad practices instead of improving code security. This leaves development teams balancing productivity against manual review effort, and forces them to rethink how they do testing and security reviews in workflows where AI writes a significant portion of the code.
In this workshop, we will walk through approaches to securing AI code-generation workflows end to end: from writing safer prompts to building “agentic” pipelines that continuously review and harden AI-produced code. We will examine examples of bad prompts that can lead to vulnerable implementations. We will then explain how to embed security guidelines, MCP-based prompt files, and stack-specific templates so the AI defaults to secure patterns instead of reinventing them.
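To make the “agentic” review idea concrete, here is a minimal sketch of one such pipeline step: a gate that scans AI-generated code for a few well-known insecure patterns before accepting it. The function name `review_generated_code` and the pattern list are hypothetical illustrations, not material from the workshop; a real pipeline would typically invoke a proper SAST tool or a reviewing model rather than hand-rolled regexes.

```python
import re

# Hypothetical examples of insecure patterns an automated review gate
# might flag in AI-generated Python code (illustrative, not exhaustive).
INSECURE_PATTERNS = {
    r"\beval\(": "use of eval() on dynamic input",
    r"\bsubprocess\.(run|call|Popen)\([^)]*shell=True": "subprocess with shell=True",
    r"(?i)(api_key|password|secret)\s*=\s*['\"][^'\"]+['\"]": "hardcoded credential",
    r"\byaml\.load\((?![^)]*SafeLoader)": "yaml.load without SafeLoader",
}

def review_generated_code(code: str) -> list[str]:
    """Return a list of findings; an empty list means the gate passes."""
    findings = []
    for pattern, description in INSECURE_PATTERNS.items():
        if re.search(pattern, code):
            findings.append(description)
    return findings

# Example: a snippet the gate would reject.
snippet = 'password = "hunter2"\nsubprocess.run(cmd, shell=True)'
print(review_generated_code(snippet))
```

Wired into a generation loop, a non-empty result would trigger a regeneration prompt that feeds the findings back to the model, rather than shipping the code as-is.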
Attendees will leave with actionable patterns, reusable templates, and concrete workflows they can plug into their own AI code-generation toolchains.
