Delve into the critical security vulnerabilities of Artificial Intelligence, exploring the dangerous world of prompt injection, prompt leaking, and jailbreaking as highlighted in SANS' Critical AI Security Controls and in real-world adversarial misuse of generative AI such as Gemini by government-backed actors. Understand how malicious actors attempt to bypass safety controls, extract sensitive information, and manipulate LLMs for nefarious purposes, drawing insights from documented cases involving Iranian, PRC, North Korean, and Russian threat actors. Learn about the offensive techniques used and the ongoing challenge of securing AI systems.
Version: 20241125