AI browsers scammed by PromptFix attacks run malicious hidden prompts
Cybersecurity researchers have unveiled a new prompt injection technique known as PromptFix. This method deceives a generative artificial intelligence (GenAI) model by embedding malicious instructions within a fake CAPTCHA check on a web page. Guardio Labs describes this technique as an “AI-era take on the ClickFix scam.” The attack demonstrates how AI-driven browsers, such as Perplexity’s Comet, which are designed to automate mundane tasks like online shopping or email management, can be manipulated into engaging with phishing landing pages or fraudulent storefronts without the user’s awareness. Guardio explains that the approach differs from traditional methods, as it does not attempt to glitch the model into compliance. Instead, it misleads the AI by employing social engineering tactics that appeal to its primary goal of assisting users quickly and efficiently.
This new reality, termed Scamlexity—a blend of “scam” and “complexity”—highlights the risks posed by agentic AI systems that can autonomously pursue goals and make decisions with minimal human oversight. With AI-powered coding assistants like Lovable shown to be vulnerable to techniques such as VibeScamming, attackers can effectively manipulate these models into disclosing sensitive information or executing purchases on counterfeit websites masquerading as legitimate retailers like Walmart. A simple instruction, such as “Buy me an Apple Watch,” can lead to significant consequences after a user inadvertently lands on a fraudulent site through various channels, including social media ads or SEO poisoning. Guardio warns that Scamlexity represents a complex new era of scams, where the convenience of AI intersects with an invisible scam landscape, leaving humans as collateral damage.