The Google Threat Intelligence Group (GTIG) has identified a significant shift: adversaries are now deploying novel AI-enabled malware in active operations, moving beyond the simple productivity gains observed in 2024. This new operational phase includes "just-in-time" AI malware, such as PROMPTFLUX and PROMPTSTEAL, which queries Large Language Models (LLMs) during execution to dynamically obfuscate code, regenerate itself, or generate malicious commands, marking a significant step toward more autonomous and adaptive malware. State-sponsored actors are also using social engineering pretexts, such as posing as students or "capture-the-flag" participants, to persuade AI systems like Gemini to bypass safety guardrails. In response, Google has disrupted the associated accounts and strengthened both its models and the Secure AI Framework (SAIF).