Anthropic AI used for cybercrime
A new report from Anthropic reveals that criminals are increasingly using AI to run core parts of their operations. The findings indicate that AI is now integrated across the entire attack cycle, from reconnaissance and malware development through to fraud and extortion. The report draws on actual cases in which Anthropic’s models were misused, offering a rare first-hand view of how attackers are evolving. While it focuses on Anthropic’s own models, the cases illustrate a broader trend that applies to advanced AI systems in general.
One of the most notable insights is that criminals are treating AI systems as active operators within their campaigns rather than mere advisers. In a case Anthropic refers to as “Vibe Hacking,” a single attacker used an AI coding agent to run a large-scale data extortion campaign that hit at least 17 organisations within a month, including hospitals, government agencies, and emergency services. Instead of asking the model for advice, the attacker gave it operational instructions and relied on it to make tactical and strategic decisions. The AI scanned networks, harvested credentials, generated evasion-capable malware, and analysed stolen data to set ransom demands. The case demonstrates how AI closes the gap between knowledge and execution, allowing a single individual to perform work that previously required a skilled team.
Criminals are now embedding AI into every stage of their operations. Anthropic documented attackers using AI for reconnaissance, privilege escalation, malware obfuscation, data theft, and ransom negotiations. One Chinese group was observed applying AI across nearly all MITRE ATT&CK tactics during an extensive campaign against Vietnamese critical infrastructure. The model acted as code developer, security analyst, and operational consultant, letting the attackers rapidly generate new exploits, automate scanning and data analysis, and plan lateral movement. This breadth of AI use poses two significant challenges for defenders. First, attacks move much faster, because AI removes manual bottlenecks. Second, AI-driven operations can adapt quickly to defensive measures, undermining the traditional assumption that complex attacks require advanced operator skills. As a result, a single individual of average skill can now run campaigns that resemble the work of a well-funded team.
Beyond technical intrusions, the report underscores how AI is transforming fraud. Criminals are using AI models to analyse stolen data, build victim profiles, and operate fraudulent services. Anthropic identified cases where AI powered carding platforms and romance scams, showing that AI lowers the barrier to a wide range of criminal activity, not just network compromise. The clear implication is that defensive strategies must evolve at least as quickly as the AI-driven threats they are meant to counter.