CTF-style GenAI security training focused on real-world LLM attacks and use cases, and on securing public and private AI services. Learn to build custom models for specific security challenges, pen-test GenAI applications, and implement guardrails for the security and monitoring of enterprise AI services.
Learn the intricacies of GenAI and LLM security through this training program, which blends CTF-style practical pen-test exercises designed for security professionals. The course provides hands-on experience in addressing real-world LLM threats and building defense mechanisms, covering threat identification, threat neutralization, and the deployment of LLM agents to tackle enterprise security challenges. By the end of this training, you will be able to:
- Identify and mitigate GenAI vulnerabilities using adversary simulation and the OWASP and MITRE ATLAS frameworks, and apply AI security and ethical principles in real-world scenarios.
- Execute and defend against advanced adversarial attacks, including prompt injection and data poisoning, using CTF-style exercises and real-world LLM use cases.
- Build an LLM firewall that leverages custom models to protect against adversarial attacks and secure enterprise AI services.
- Develop and deploy enterprise-grade LLM defenses, including custom guardrails for input/output protection, and benchmark models for security.
- Implement retrieval-augmented generation (RAG) to build custom LLM agents that solve specific security challenges, such as compliance automation, cloud policy generation, and a Security Operations Copilot.
- Establish a comprehensive LLM SecOps process to secure the GenAI supply chain against adversarial attacks.
* Please note: This is not included in the Main Conference registration and requires a separate registration.