● HANDS-ON AI AppSec WORKSHOP
Move beyond basic prompt injection. Learn to identify, exploit, and mitigate critical vulnerabilities across the entire AI application lifecycle, from simple wrappers, RAG systems and autonomous agents to insecure output handling in an intensive 2-day hands-on AI security workshop.
This training is delivered through custom-built labs utilizing a dedicated lab environment to support intensive, hands-on exploitation of AI-integrated applications.
Across two intensive 5-hour sessions, you will adopt the attacker’s mindset to dismantle AI-driven architectures. You will work hands-on with real-world scenarios, exploiting Large Language Models (LLMs), bypassing safety filters, and hijacking autonomous agents to understand how to build truly resilient AI implementations.
● WHAT YOU’LL DO
Master advanced prompt injection (Direct & Indirect) and jailbreaking techniques to bypass model alignment and safety guards.
Exploit Retrieval Augmented Generation (RAG) systems by poisoning data sources and manipulating the context injected into the LLM.
Target autonomous agents and tool-calling interfaces to execute unauthorized actions and achieve remote code execution (RCE).
Demonstrates how unvalidated LLM outputs lead to traditional web vulnerabilities like XSS, CSRF, and SSRF in an AI context.
Analyze the risks of third-party model dependencies, insecure plugins, and the broader AI software supply chain.
Apply the OWASP Top 10 for LLMs framework to implement robust input/output filtering and secure architecture patterns.
● AGENDA
Attacking the Model & Data Layers
→ Introduction to AI AppSec & the LLM Threat Landscape
→ Direct vs. Indirect Prompt Injection
→ Advanced Jailbreaking & Bypassing Safety Filters
→ Training Data & RAG Context Poisoning
→ Lab: Breaking Model Alignment & Exploiting Insecure RAG
Exploiting Integrations & Autonomous Agents
→ Hijacking Autonomous Agents & Tool-Calling
→ Insecure Output Handling: AI-driven XSS and SSRF
→ Supply Chain Vulnerabilities & Insecure Plugins
→ OWASP Top 10 for LLMs: Defense-in-Depth for AI
→ Final Lab: Chaining AI exploits for full system compromise
● INSTRUCTORS

Veteran security leader and Founder of Grafos AI, the safe and secure AI agent platform that automates Infrastructure as Code (IaC) editing for developers. A leading voice in Agentic AI Security and a key contributor to the OWASP AI Exchange, Spyros specializes in deploying autonomous systems that enhance developer velocity without compromising architectural integrity. With nearly two decades of experience ranging from hands-on engineering to the CISO’s office, he is a frequent speaker at global stages like DEF CON and QCon, where he translates complex security theory into pragmatic, automated workflows.
● WHAT YOU’LL GAIN
→ Exploitation Mastery: Ability to perform complex multi-stage attacks on LLM-powered applications.
→ Architectural Insight: Deep understanding of how RAG and Agents introduce new attack vectors.
→ Strategic Defense: Practical skills to mitigate most attacks and build defence in depth auto-recovery and notification.
→ Adversarial Mindset: Expertise in red teaming stochastic systems to identify “unforeseeable” failure modes.
● REQUIREMENTS & PREREQUISITES
→ Technical Proficiency: Basic understanding of Python and Web AppSec (OWASP Top 10) fundamentals.
→ Tooling: Familiarity with API interaction tools (e.g., Postman, cURL) and basic command-line usage.
● BOOK YOUR SEAT
Join us for two days of hands-on AI security exploitation.
To provide the best experiences, we and our partners use technologies like cookies to store and/or access device information. Consenting to these technologies will allow us and our partners to process personal data such as browsing behavior or unique IDs on this site and show (non-) personalized ads. Not consenting or withdrawing consent, may adversely affect certain features and functions.
Click below to consent to the above or make granular choices. Your choices will be applied to this site only. You can change your settings at any time, including withdrawing your consent, by using the toggles on the Cookie Policy, or by clicking on the manage consent button at the bottom of the screen.
Syllabus:
Intro to GCP
Exploitation of GCP Services
Methodologies
Security Services
Syllabus:
Intro to AWS
Exploitation of AWS Services
Methologies
Common Detection Mechanisms
Syllabus:
Azure Basics
Exploitation of Azure Services
Methologies
Common Detection Mechanisms
Fundamentals and Setup
Advanced Techniques and Practical Application
Advanced Techniques and Practical Application
Fundamentals & Setup