September 21
Attackers continue to evolve their tradecraft to evade EDR preventions and SIEM detections. Defenders continually try to build high-quality detections and prevention rules, but often lack the means to validate that those rules actually work. The Adaptive Threat Simulation and Detection Engineering workshop will walk students through creating attack playbooks and campaigns, building high-quality detections, and validating that those detections catch the simulated attacks. Students will have the opportunity to interact with a live lab environment for attack simulation and detection engineering.
Learning Objectives:
- Learn how attackers construct their attacks and how to simulate them in a lab environment.
- Learn how to build an attack playbook and campaign to simulate attacker behavior.
- Learn how to build a high-quality detection and validate it against the playbooks and campaigns.
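The validation loop above can be sketched in a few lines: replay a simulated campaign of events through a detection rule and confirm the rule fires. The event fields and the rule itself are illustrative assumptions, not tied to any specific EDR or SIEM product.

```python
# Toy detection: flag PowerShell launched with an encoded command.
# Field names ("process_name", "command_line") are hypothetical.
def suspicious_powershell(event: dict) -> bool:
    return (
        event.get("process_name", "").lower() == "powershell.exe"
        and "-enc" in event.get("command_line", "").lower()
    )

def run_playbook(events: list[dict], detection) -> bool:
    """Replay simulated events and report whether the detection fired."""
    return any(detection(e) for e in events)

# Simulated attack campaign: a benign event followed by the attack step.
campaign = [
    {"process_name": "explorer.exe", "command_line": "explorer.exe"},
    {"process_name": "powershell.exe",
     "command_line": "powershell.exe -enc SQBFAFgA..."},
]
```

Running `run_playbook(campaign, suspicious_powershell)` returns `True`, confirming the detection catches the simulated attack; a campaign of only benign events should return `False`.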
Join us for this workshop on the NIST Cybersecurity Framework (CSF) 2.0, where we'll not only explore the latest updates but also dive deep into the practical application of the CSF within your organization.
In this workshop we'll use interactive examples that showcase the CSF's flexibility for businesses of all types. We'll dig into each Function, highlighting key Categories and their significance in managing cybersecurity risks. Real-world examples will provide you with actionable insights on integrating the CSF into your operations, enhancing your cybersecurity posture and resilience. Don't miss this opportunity to elevate your cybersecurity strategy with CSF 2.0!
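For reference, the Function-to-Category structure the workshop walks through can be captured as a simple mapping. CSF 2.0 defines six Functions (Govern is new in 2.0); the single example Category shown per Function is illustrative, not exhaustive.

```python
# CSF 2.0 Functions with one example Category apiece.
# Category selections here are illustrative samples from the framework.
CSF_2_0_FUNCTIONS = {
    "Govern": "Organizational Context",
    "Identify": "Asset Management",
    "Protect": "Identity Management, Authentication, and Access Control",
    "Detect": "Continuous Monitoring",
    "Respond": "Incident Management",
    "Recover": "Incident Recovery Plan Execution",
}
```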
CTF-style GenAI security training focused on real-world LLM attacks and use cases and on securing public and private AI services. Learn to build custom models for specific security challenges, pen-test GenAI apps, and implement guardrails for the security and monitoring of enterprise AI services.
Learn the intricacies of GenAI and LLM security through this training program, designed for security professionals, that blends CTF-style practical pen-test exercises with defense building. This course provides hands-on experience in addressing real-world LLM threats and constructing defense mechanisms, encompassing threat identification and neutralization and the deployment of LLM agents to tackle enterprise security challenges. By the end of this training, you will be able to:
- Identify and mitigate GenAI vulnerabilities using adversary simulation, the OWASP and MITRE ATLAS frameworks, and apply AI security and ethical principles in real-world scenarios.
- Execute and defend against advanced adversarial attacks, including prompt injection and data poisoning, utilizing CTF-style exercises and real-world LLM use-cases.
- Build an LLM firewall, leveraging custom models to protect against adversarial attacks and secure enterprise AI services.
- Develop and deploy enterprise-grade LLM defenses, including custom guardrails for input/output protection and benchmarking models for security.
- Implement RAG to train custom LLM agents and solve specific security challenges, such as compliance automation, cloud policy generation, and a Security Operations Copilot.
- Establish a comprehensive LLM SecOps process to secure the GenAI supply chain against adversarial attacks.
Learning Objectives:
- Proficiency in identifying and mitigating GenAI vulnerabilities, applying security and ethical principles to real-world scenarios, and combating advanced adversarial attacks including prompt injection and data poisoning.
- Skills to build and deploy enterprise-grade LLM defenses, including custom guardrails and models for input/output protection, alongside practical experience in securing AI services against adversarial threats.
- The ability to develop custom LLM agents for specific security challenges, such as compliance automation and cloud policy generation, and establish a comprehensive SecOps process to enhance GenAI supply chain security.
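The retrieval step behind a RAG-based security agent (such as a compliance copilot) can be sketched as follows. Real systems use embedding models and a vector store; here, simple word-overlap scoring stands in for semantic similarity, and the knowledge-base entries are hypothetical policy statements.

```python
# Hypothetical policy snippets standing in for an indexed knowledge base.
KNOWLEDGE_BASE = [
    "S3 buckets must block public access and enable default encryption.",
    "IAM policies should grant least privilege and require MFA for admins.",
    "CloudTrail must be enabled in all regions with log file validation.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by shared-word count with the query (toy similarity)."""
    q_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str) -> str:
    """Assemble the context-augmented prompt an agent would send to the LLM."""
    context = "\n".join(retrieve(query, KNOWLEDGE_BASE))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

A query about S3 public access retrieves the S3 policy line, and `build_prompt` prepends it as grounding context before the question reaches the model.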