
Threat actors are leveraging artificial intelligence to enhance their operations, accelerate their attacks, and target autonomous AI systems.

The cybersecurity landscape has undergone a significant transformation as threat actors increasingly weaponise artificial intelligence to enhance their attack capabilities and target the AI systems that organisations rely on. According to the CrowdStrike 2025 Threat Hunting Report, adversaries have moved beyond using AI as a supplementary tool, integrating generative AI technologies into every phase of their operations, from reconnaissance to payload deployment. This shift marks a fundamental change in cyber warfare, where traditional attack methods are supercharged by machine learning algorithms and automated decision-making processes. The rise of AI-powered threat campaigns has enabled less skilled adversaries to execute sophisticated attacks that once required advanced technical expertise. Notable examples include the Funklocker and SparkCat malware families, which showcase GenAI-built malware designed to evade traditional detection through dynamically generated code and polymorphic behaviours.
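One reason dynamically generated, polymorphic code evades traditional detection is that signature-based tools key on file hashes, and every AI-regenerated variant hashes differently even when its behaviour is identical. A minimal sketch of the problem (the snippets below are harmless, hypothetical stand-ins, not samples from Funklocker or SparkCat):

```python
import hashlib

# Two functionally identical snippets with trivially different source text,
# standing in for polymorphic variants of the same payload logic.
# (Hypothetical illustration only.)
variant_a = "def run():\n    x = 41\n    return x + 1\n"
variant_b = "def run():\n    y = 40\n    return y + 2\n"

# Signature-based detection compares a hash of the file contents...
hash_a = hashlib.sha256(variant_a.encode()).hexdigest()
hash_b = hashlib.sha256(variant_b.encode()).hexdigest()

# ...so two variants with identical behaviour produce different signatures.
print(hash_a == hash_b)  # False
```

This is why defenders are shifting toward behavioural detection, which observes what code does at runtime rather than what its bytes look like on disk.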

CrowdStrike analysts have identified a concerning trend involving the DPRK-nexus adversary FAMOUS CHOLLIMA, which infiltrated over 320 companies in the past year, reflecting a staggering 220% year-over-year increase. This threat actor employs generative AI throughout the hiring process, using real-time deepfake technology to disguise identities during video interviews and AI tools to perform job functions while maintaining covert access to organisational systems. The most sophisticated aspect of these AI-powered campaigns is their ability to establish persistent access through enhanced social engineering techniques. The group SCATTERED SPIDER exemplifies this by combining vishing attacks with help desk impersonation, utilising AI-generated scripts to accurately provide employee identification numbers and answer verification questions. Their operators leverage machine learning algorithms to analyse publicly available information, constructing convincing personas that can bypass multifactor authentication systems and gain access to SaaS environments, often achieving full network encryption within 24 hours of initial compromise. 
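One defensive countermeasure against this help-desk impersonation pattern is correlating identity events: a help-desk-initiated password reset followed shortly by enrolment of a new MFA factor is a classic account-takeover signature. A minimal sketch, assuming a hypothetical event log (the field names and `flag_suspicious_resets` helper are illustrative, not a real SIEM or identity-provider schema):

```python
from datetime import datetime, timedelta

# Hypothetical identity-log events; schema is illustrative only.
events = [
    {"user": "jdoe", "action": "helpdesk_password_reset", "time": datetime(2025, 8, 1, 9, 0)},
    {"user": "jdoe", "action": "mfa_factor_enrolled", "time": datetime(2025, 8, 1, 9, 7)},
    {"user": "asmith", "action": "mfa_factor_enrolled", "time": datetime(2025, 8, 1, 11, 0)},
]

def flag_suspicious_resets(events, window=timedelta(minutes=30)):
    """Flag users whose help-desk password reset is followed by a new
    MFA factor within `window` -- a pattern seen in help-desk
    impersonation attacks that re-enrol the attacker's own device."""
    resets = {e["user"]: e["time"] for e in events
              if e["action"] == "helpdesk_password_reset"}
    flagged = []
    for e in events:
        if e["action"] == "mfa_factor_enrolled":
            reset_time = resets.get(e["user"])
            if reset_time is not None and timedelta(0) <= e["time"] - reset_time <= window:
                flagged.append(e["user"])
    return flagged

print(flag_suspicious_resets(events))  # ['jdoe']
```

Given 24-hour compromise-to-encryption timelines, alerting on this correlation in near real time matters more than reviewing it in a daily report.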
