Wednesday 

Room 4 

11:40 - 12:40 

(UTC+01)

Talk (60 min)

The Risky Business of AI Illiteracy

We all know AI is a risk, but how does that risk actually apply to us? How much risk is our company willing to undertake, and how can we build a framework to help executives make wise choices while taking those risks? Because we all know they are very willing to undertake AI risks to be first to market, or even just to stay relevant in today's industry.

AI/ML
Architecture

Introduction: Who I am, and how my experience building vulnerability management programs altered my views on risk.

Seeing the Forest: I discuss the common focuses of security programs, and how headlines can influence reactions and prioritizations. Then I introduce some of the largest breaches of the past few years and highlight how AI actually played into them.

Building Blocks: Before I can get to risk modeling, I introduce the tools needed to understand your own risks: inventories, topologies, data flow diagrams, and relationships with the teams that own them. This is a ‘trick,’ however: I reveal that building those blocks is threat modeling without even realizing it, which is why I harp on documenting everything you find along the way.

Prioritization: To properly prioritize which risks you accept, you first need to understand the feasibility and likelihood of AI being exploited, as well as what impact its exploitation would have on the business: can revenue still be generated, which services go offline, can an attacker pivot and make things worse, how long would recovery take, etc. Understanding how your business operates, and what's interconnected, is key to building your priorities.
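The prioritization idea above can be sketched as a simple scoring model. This is a hypothetical illustration, not the speaker's method: the `RiskItem` structure, the 1–5 scales, the example risks, and the likelihood × feasibility × impact formula are all my own assumptions.

```python
from dataclasses import dataclass

@dataclass
class RiskItem:
    name: str
    likelihood: int      # 1-5: how likely is exploitation?
    feasibility: int     # 1-5: how easy would exploitation be?
    revenue_impact: int  # 1-5: can revenue still be generated?
    blast_radius: int    # 1-5: can an attacker pivot and make things worse?
    recovery_time: int   # 1-5: how long would it take to recover?

    @property
    def score(self) -> int:
        # Likelihood-weighted impact: risks that are both likely and
        # feasible, with broad business impact, rise to the top.
        impact = self.revenue_impact + self.blast_radius + self.recovery_time
        return self.likelihood * self.feasibility * impact

# Illustrative entries only.
risks = [
    RiskItem("Public LLM chatbot prompt injection", 5, 4, 2, 3, 2),
    RiskItem("Internal copilot leaking source code", 3, 3, 1, 4, 3),
    RiskItem("AI-generated code in CI pipeline", 2, 2, 4, 5, 4),
]

for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.score:4d}  {r.name}")
```

A real program would tune the factors and weights to how its own business operates; the point is that prioritization needs both a likelihood story and an impact story per risk, not a gut ranking.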

Mitigation: It’s not always possible to eliminate all of your risks, which is why we mitigate them as best we can. I explain how mitigation can take two forms: making an AI system harder to exploit, or decreasing the likelihood of a threat taking place.

Acceptance: Security staff cannot accept risks on the business's behalf, yet oftentimes executives insist that we do, or insist on mitigations that the security team finds unsatisfactory. Instead, executives should be presented with a risk acceptance document outlining the potential impacts and the better mitigations that could be implemented, so the decision is formally theirs. That way we can CYA.

AI Inventorying: Just inventorying which AI tools you have isn’t enough; the same tool can be used in many different ways and configurations. How do you document how they are actually used while preventing too much disruption and upset from the executive suite?
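One way to capture "how it is used" rather than just "what we have" is to inventory usage records instead of tools. This is a minimal sketch under my own assumptions; the field names and example entries are illustrative, not from the talk.

```python
from dataclasses import dataclass

@dataclass
class AIUsageRecord:
    tool: str                # the AI product itself
    owner_team: str          # the team relationship you built earlier
    use_case: str            # how the tool is actually used
    data_classes: list[str]  # what data flows into it
    configuration: str       # deployment and retention notes
    documented_on: str       # keep the paper trail current

# The same tool appears twice with very different risk profiles,
# which a tool-only inventory would collapse into one line.
inventory = [
    AIUsageRecord(
        tool="Hosted LLM API",
        owner_team="Marketing",
        use_case="Drafting customer-facing copy",
        data_classes=["public", "internal"],
        configuration="No fine-tuning; vendor retains prompts 30 days",
        documented_on="2025-01-15",
    ),
    AIUsageRecord(
        tool="Hosted LLM API",
        owner_team="Engineering",
        use_case="Summarizing support tickets",
        data_classes=["customer PII"],
        configuration="Zero-retention contract; PII redaction upstream",
        documented_on="2025-01-15",
    ),
]

assert len({r.tool for r in inventory}) == 1 and len(inventory) == 2
```

Keeping records this lightweight is one way to gather the usage detail without a heavyweight process that the executive suite would push back on.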

Wrap up: A recap that there are always new threats, vulnerabilities, and AI tools someone will want to onboard, so don’t get derailed from your priorities.

Sean Juroviesky

Sean Juroviesky is a dedicated security and risk management expert with extensive experience navigating complex environments. Sean excels at developing a comprehensive understanding of intricate systems and crafting strategic roadmaps to revitalize security programs. By identifying high-risk areas and optimizing the use of existing resources, Sean removes barriers between teams to enhance communication and coordination, driving effective security outcomes. Beyond their professional pursuits, Sean finds joy in backpacking through the mountains with their adventurous Australian Shepherd, partner, and twins, embracing the serenity of nature and the thrill of exploration.