
Claude Code vulnerabilities allow attackers to execute unauthorised commands, with the AI itself leveraged to help build the exploits.

Two high-severity vulnerabilities in Anthropic’s Claude Code could enable attackers to bypass its restrictions and execute unauthorised commands. Notably, Claude itself inadvertently assisted in crafting exploits against its own security mechanisms. The flaws, identified by Elad Beber of Cymulate and tracked as CVE-2025-54794 and CVE-2025-54795, illustrate how an AI system’s analytical capabilities can be turned against its own security controls through careful prompt crafting.

Claude Code is Anthropic’s AI-powered coding assistant, designed to help developers write and execute code through natural language. Its security framework rests on two main defences: Current Working Directory (CWD) restrictions that sandbox file operations, and command whitelisting that permits only pre-approved operations such as ls, cat, and echo.
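
As a rough illustration of that framework, here is a minimal Python sketch modelling both defences. The function names, whitelist contents, and check logic are assumptions for demonstration, not Anthropic’s actual implementation.

```python
import os
import shlex

# Example whitelist drawn from the commands named above; the real list is longer.
APPROVED_COMMANDS = {"ls", "cat", "echo"}

def is_request_allowed(command_line: str, cwd: str, target_path: str) -> bool:
    """Model of the two defences: command whitelisting plus CWD containment."""
    argv = shlex.split(command_line)
    if not argv or argv[0] not in APPROVED_COMMANDS:
        return False  # executable is not on the whitelist
    # Containment modelled the way the advisory describes it: a plain
    # string-prefix test on the resolved path (the weakness behind
    # CVE-2025-54794, demonstrated further down).
    return os.path.abspath(target_path).startswith(cwd)
```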

CVE-2025-54794, a path restriction bypass, exploits the simplistic prefix-based path validation in Claude Code’s directory containment system. The system checks only whether a requested path begins with an approved directory prefix, so an attacker can create a directory whose name shares that prefix and pass validation. For instance, if the working directory is /tmp/allowed_dir, an attacker could create /tmp/allowed_dir_malicious, which satisfies the check (a sketch of this flaw follows below). The vulnerability grants unauthorised access to files outside the intended sandbox; combined with symbolic links, it could expose critical system files and lead to privilege escalation in environments where Claude Code runs with elevated privileges.

CVE-2025-54795, a command injection flaw, permits arbitrary command execution due to inadequate input sanitisation of whitelisted commands. Attackers can inject malicious commands while disguising them as legitimate operations; Beber demonstrated this by using the echo command to launch unauthorised applications (sketched below). The finding highlights a broader AI security challenge: these systems can be directed, through social engineering and prompt manipulation, to identify and exploit their own weaknesses.
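
The prefix flaw is easy to reproduce in isolation. The snippet below, a sketch using the hypothetical paths from the example above, shows the naive check accepting a sibling directory, and one way a fix can compare whole path components instead.

```python
import os

def path_is_contained(requested: str, approved_dir: str) -> bool:
    # Flawed: a pure string-prefix comparison, with no check that the
    # prefix ends on a directory boundary.
    return os.path.abspath(requested).startswith(approved_dir)

approved = "/tmp/allowed_dir"
print(path_is_contained("/tmp/allowed_dir/notes.txt", approved))          # True  (intended)
print(path_is_contained("/tmp/allowed_dir_malicious/evil.sh", approved))  # True  (the bypass)

def path_is_contained_fixed(requested: str, approved_dir: str) -> bool:
    # Safer: compare whole path components, so "allowed_dir_malicious"
    # no longer matches "allowed_dir".
    resolved = os.path.realpath(requested)  # also collapses symbolic links
    return os.path.commonpath([resolved, approved_dir]) == approved_dir

print(path_is_contained_fixed("/tmp/allowed_dir_malicious/evil.sh", approved))  # False
```

Resolving symlinks before the comparison, as in the fixed version, also closes the symbolic-link variant of the attack mentioned above.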
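
The echo-based injection follows a similar pattern. Below is a sketch of the vulnerability class, not Cymulate’s actual proof of concept: the whitelist inspects only the command name, while the full string reaches a shell, so metacharacters smuggle in a second, non-whitelisted command.

```python
import subprocess

APPROVED_COMMANDS = {"ls", "cat", "echo"}

def run_if_whitelisted(command_line: str) -> None:
    name = command_line.split()[0]
    if name not in APPROVED_COMMANDS:
        raise PermissionError(f"{name} is not whitelisted")
    # Flawed: shell=True hands the raw string to a shell, so ';', '&&',
    # '|', and $(...) all execute even though only "echo" was vetted.
    subprocess.run(command_line, shell=True, check=False)

# Passes the whitelist check as "echo", but the ';' chains in a second command:
run_if_whitelisted('echo harmless; id')
```

A common mitigation is to avoid shell=True entirely and execute a parsed argument vector, validating each argument, so shell metacharacters are treated as literal text rather than syntax.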
