white robot
| |

Parallel-Poisoned Web Attack presents poisoned web pages to AI web bots

AI agents can be manipulated into executing malicious actions by websites that remain concealed from regular users, as discovered by JFrog AI architect Shaked Zychlinski. This innovative method enables attackers to inject prompts or instructions into these autonomous AI-powered assistants, effectively hijacking their behaviour for nefarious purposes. Indirect prompt-injection poisoning attacks, where harmful instructions are embedded within the same page visible to human visitors, are often undetectable by users but can still be identified by security systems. The newly identified “parallel-poisoned web” attack takes this a step further by serving a distinct version of the page solely to AI agents. Zychlinski noted that since the malicious content is never visible to human users or standard security crawlers, the attack is exceptionally stealthy, exploiting the agent’s fundamental function of ingesting and acting upon web data.

The “parallel poisoned web” attack relies on browser fingerprinting. AI agents that browse the web currently exhibit highly predictable fingerprints based on automation framework signatures, behavioural patterns, and specific network characteristics. This predictability allows web servers to easily identify AI agents as visitors and serve them a cloaked version of the website. This cloaked version may appear identical to the benign one but contains hidden adversarial prompts designed to hijack the agent. It may also present a completely different version of the page, such as one requiring authentication through an environment variable or another secret key accessible to the agent on the user’s machine. The malicious prompts embedded in the cloaked website can instruct the AI agent to perform actions like retrieving sensitive information or installing malware. Zychlinski demonstrated the feasibility of this attack by creating an internal website with both benign and malicious versions, successfully testing it on agents powered by Anthropic’s Claude 4 Sonnet, OpenAI’s GPT-5 Fast, and Google’s Gemini 2.5 Pro.

To counteract this stealthy attack, which is challenging to detect with conventional tools, Zychlinski emphasised the need for a new generation of defences for AI agents. Protecting these agents will require various countermeasures, including obfuscating their browsing session fingerprints to resemble those initiated by humans. Additionally, it is suggested that agents be divided into two roles: the planner, which serves as the brain and does not directly interact with risky data sourced from the web. This approach aims to enhance the security of AI agents in a landscape where not everything is as it appears. 

Similar Posts