CODE BLUE 2025

Trainings

AI Agent Security Masterclass

As AI-powered agents transition from novelties to essential co-pilots in software development and security operations, mastering their architecture and securing their behavior becomes mission-critical. This masterclass offers a comprehensive, practical deep-dive into attacking and defending AI agents, specifically tailored for Application Security and DevSecOps professionals tasked with integrating AI into their workflows safely and effectively. This two-day, hands-on training focuses on empowering you to build and secure AI agentic systems.

Training Outline

  • Title

    AI Agent Security Masterclass

  • Trainer

    Abhay Bhargav

  • Language

    English

  • Date

    2025-11-16 9:00 - 18:30
    2025-11-17 9:00 - 18:30

  • Venue

    Bellesalle Shinjuku Minamiguchi 4F Room5

  • Capacity

    24 (minimum of 8 students required)

  • Remarks

    • Includes a 2-day conference ticket (November 18th to 19th, 2025) for training attendees

Training Application

Buying Ticket
Ticket: Standard
Price: 280,000 JPY (tax included)
Sales period: Through November 11th

Training Detail

Who should take this course
  • Those who want to learn about cutting-edge AI agent security
Student requirements
  • Foundational understanding of application security principles.
  • Familiarity with threat modeling concepts, common vulnerability types (e.g., OWASP Top 10 for Web), and security testing (SAST/DAST/SCA) is beneficial.
  • Basic knowledge of Python programming or scripting is recommended as labs involve reading/writing simple Python code for AI API/framework interaction.
  • No prior machine learning or deep AI expertise is required. Core AI/LLM concepts relevant to agents will be introduced.
  • An eagerness to experiment, a builder’s mindset, and an interest in both offensive and defensive security are key.

What skills will participants learn at your training?

  • Hands-On AI Agent Engineering & Defense: Gain practical, build-centric experience in creating AI agents integrated with security tools (via MCP and frameworks), and critically, securing those agents with sandboxing, robust permission controls, prompt hardening, and diligent auditing. You’ll leave capable of architecting AI-driven workflows that are both powerful and resilient. (A minimal illustrative sketch of this permission-gating pattern appears after this list.)
  • Mastering AI System Threat Modeling: Develop methodologies to comprehensively threat model and risk-assess AI agent workflows. You’ll learn to pinpoint unique threat vectors in agentic systems—from sophisticated prompt injections and RAG data poisoning to malicious plugin exploits—and design effective, architecturally-sound mitigations and controls.
  • Developing Offensive AI Security Skills: Cultivate an attacker’s mindset towards AI agents. Through immersive red-team style labs, you will understand precisely how adversaries exploit weaknesses such as excessive agency, tool misuse, and unchecked autonomous decision loops. This offensive insight directly informs your ability to implement proactive, robust defenses and respond effectively to AI-specific security incidents.
  • MCP and Secure Tool Orchestration Expertise: Deeply understand the Model Context Protocol (MCP) and its pivotal role in standardizing AI tool usage. You will learn and apply best practices for secure tool integration—including verifying tool integrity, preventing cross-tool interference, and managing plugin supply chains—enabling you to extend AI capabilities in a controlled, secure manner within your DevSecOps pipelines.
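
The permission-control and auditing ideas named above can be made concrete with a small example. The following is a minimal, hypothetical Python sketch, not course material and not tied to MCP or any specific agent framework: it dispatches a model-requested tool call only when an explicit allowlist and a per-tool argument check both approve it, and logs every allow/deny decision for audit. All names here (ToolPolicy, dispatch_tool, the example tools, staging.example.com) are illustrative assumptions.

from dataclasses import dataclass, field
from typing import Any, Callable
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent.audit")


@dataclass
class ToolPolicy:
    # Allowlist of tools the agent may call, with a per-tool argument check.
    allowed: dict[str, Callable[[dict], bool]] = field(default_factory=dict)

    def permits(self, tool_name: str, args: dict) -> bool:
        check = self.allowed.get(tool_name)
        return check is not None and check(args)


# Hypothetical tools the agent could be given (illustrative only).
TOOLS: dict[str, Callable[..., Any]] = {
    "read_file": lambda path: open(path).read(),
    "run_scan": lambda target: f"scan queued for {target}",
}


def dispatch_tool(policy: ToolPolicy, tool_name: str, args: dict) -> Any:
    # Execute a model-requested tool call only if the policy allows it,
    # and log the decision either way so the call trail can be audited.
    if tool_name not in TOOLS or not policy.permits(tool_name, args):
        log.warning("DENIED tool=%s args=%s", tool_name, args)
        raise PermissionError(f"tool call not permitted: {tool_name}")
    log.info("ALLOWED tool=%s args=%s", tool_name, args)
    return TOOLS[tool_name](**args)


# Example policy: files only under /workspace, scans only against one host.
policy = ToolPolicy(allowed={
    "read_file": lambda a: str(a.get("path", "")).startswith("/workspace/"),
    "run_scan": lambda a: a.get("target") == "staging.example.com",
})

print(dispatch_tool(policy, "run_scan", {"target": "staging.example.com"}))

The same gating idea carries over to MCP tool servers: a policy check placed between the model's requested action and the tool's actual execution is where excessive-agency and tool-misuse risks are most easily contained.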

What students should bring
A laptop with a modern web browser and reliable internet connectivity. All participants will receive access to a cloud-based lab environment with all required tools, LLMs, and agent frameworks. No special hardware or local software installations are needed.

What students will be provided with
Participants will receive access to the course materials (slides and notes), code samples and lab exercises, and extended access to the cloud-based lab environment for a period after the course for continued practice and experimentation.

Abhay Bhargav

Abhay Bhargav is the Founder and Chief Research Officer of AppSecEngineer, an elite, hands-on online training platform, and the Founder of we45, a specialized AppSec company.
Abhay started his career as a breaker of apps, in pentesting and red-teaming, but today is more involved in scaling AppSec with Cloud-Native Security and DevSecOps.
He has created pioneering work in DevSecOps and AppSec Automation, including the world’s first hands-on training program on DevSecOps focused on Application Security Automation. Abhay is also committed to open source and developed ThreatPlaybook, the first Threat Modeling solution at the crossroads of Agile and DevSecOps.
Abhay is a speaker and trainer at major industry events including DEF CON, BlackHat, OWASP AppSecUSA, AppSec EU, and AppSecCali. His training programs have been sold-out events at conferences such as AppSecUSA, AppSec EU, AppSecDay Melbourne, CODE BLUE (Japan), and BlackHat. He has also authored two international publications, on Java Security and PCI Compliance.