07/11/2025 | Press release
Artificial intelligence is becoming our go-to guide for everything from restaurant recommendations to login pages. But what happens when your AI assistant confidently directs you to a fake website? Recent research from Netcraft reveals a troubling reality: AI models are regularly hallucinating URLs and sending users straight into cybercriminals' hands.
This isn't a hypothetical problem. It's happening right now, at scale, and the implications for cybersecurity are serious.
When cybersecurity researchers at Netcraft decided to test how well AI models handle simple login requests, the results were alarming. They asked a popular large language model (LLM) straightforward questions like "Where do I log in to [brand name]?" for 50 well-known companies.
The AI provided 131 unique website addresses for those 50 brands, and the breakdown should concern every business leader: roughly two-thirds pointed to the correct, brand-owned domains, while the rest were unregistered, parked, inactive, or belonged to entirely unrelated businesses.
This means that 34% of the time, users were directed to sites not owned by the brands they were trying to reach. For cybercriminals, this represents an unprecedented opportunity.
The research uncovered something even more disturbing: AI systems are already recommending active phishing sites. In one documented case, Perplexity, a popular AI-powered search engine, directed users to a fake Wells Fargo login page when asked for the official URL.
The malicious site wasn't buried in search results. It was the top recommendation, presented with the same confidence as any legitimate answer. The real Wells Fargo site appeared below it, making the scam even more convincing.
This incident highlights a critical vulnerability in how we interact with AI. Unlike traditional search engines, AI presents single answers with authority. Users are conditioned to trust these responses, making them prime targets for deception.
AI hallucinations create what security experts call "a perfect storm" for cybercriminals. Here's why this threat is so potent:
When AI models consistently hallucinate the same fake URLs, attackers can simply register those domains and wait for victims to arrive. It's like having a roadmap to confused users who trust AI recommendations.
AI doesn't present URLs with uncertainty. It delivers them with the same confidence as verified information, making users less likely to question the authenticity.
As AI interfaces become the default way people search for information, the potential victim pool grows exponentially. A single hallucinated URL can affect thousands of users.
Regional banks, credit unions, and mid-sized companies face the greatest danger. These organizations are less likely to appear in AI training data, making hallucinations more common and potentially more damaging.
The threat extends beyond simple hallucinations. Cybercriminals are actively poisoning AI training data to manipulate future responses. Researchers discovered a sophisticated campaign where attackers created fake GitHub repositories containing malicious code designed to be consumed by AI development tools.
This supply chain attack on AI training data represents a new frontier in cybercrime. As AI coding assistants become more prevalent, the potential for widespread code poisoning grows significantly.
Many organizations might consider registering all potential hallucinated domains as a defensive measure. However, this approach has fundamental limitations:
AI models can generate a practically unlimited number of URL variations. The space of plausible lookalike domains is combinatorially vast, making comprehensive defensive registration impossible; the sketch below illustrates the scale.
As AI models are updated and retrained, they develop new hallucination patterns. What seems comprehensive today may be incomplete tomorrow.
The cost and administrative burden of registering and maintaining thousands of defensive domains can quickly become prohibitive.
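To make that scale concrete, here is a small Python sketch built around a hypothetical brand name (not a real company). It enumerates only a handful of prefix, suffix, and top-level-domain combinations, and even that narrow slice yields well over a hundred candidate domains before misspellings or digit swaps are considered.

```python
# Sketch: enumerate a few classes of lookalike domains for a single
# hypothetical brand name to show how quickly the space grows.
# Hallucinated URLs are not limited to these patterns, so the real
# space is far larger than this count suggests.
from itertools import product

brand = "examplebank"          # hypothetical brand, for illustration only
prefixes = ["", "my", "secure", "login-", "online"]
suffixes = ["", "-login", "-secure", "online", "-portal"]
tlds = [".com", ".net", ".org", ".co", ".io", ".app"]

candidates = {
    f"{p}{brand}{s}{t}"
    for p, s, t in product(prefixes, suffixes, tlds)
}

print(f"{len(candidates)} candidate domains from just "
      f"{len(prefixes)} x {len(suffixes)} x {len(tlds)} simple combinations")
# Adding misspellings, hyphen shifts, digit substitutions, and more TLDs
# multiplies this further -- registering them all defensively is not practical.
```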
While the AI hallucination problem is complex, there are concrete steps organizations can take to protect themselves and their users:
Deploy validation systems that verify URLs before presenting them to users. This might include real-time domain ownership verification and reputation checking.
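At its simplest, such a check can refuse to surface any link whose host is not on a maintained allowlist of verified brand domains. The Python sketch below shows the idea; the domain entries and the calling context are assumptions for illustration, and a production system would layer on reputation feeds and certificate checks.

```python
# Sketch of a pre-display URL check, assuming you maintain your own
# allowlist of verified brand domains (the entries below are examples).
from urllib.parse import urlparse

VERIFIED_DOMAINS = {
    "wellsfargo.com",      # example entry; maintain and verify your own list
    "examplebank.com",     # hypothetical brand from the earlier sketch
}

def is_verified(url: str) -> bool:
    """Return True only if the URL uses HTTPS and its host is a verified
    domain or a subdomain of one."""
    parsed = urlparse(url)
    if parsed.scheme != "https" or not parsed.hostname:
        return False
    host = parsed.hostname.lower()
    return any(
        host == domain or host.endswith("." + domain)
        for domain in VERIFIED_DOMAINS
    )

# An AI assistant or search layer would call this before surfacing a link:
print(is_verified("https://connect.secure.wellsfargo.com/login"))  # True
print(is_verified("https://wellsfargo-login.example.net/"))        # False
```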
Establish continuous monitoring for domains that might be registered to impersonate your brand. This includes common misspellings and variations that AI might hallucinate.
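A lightweight version of this monitoring can be scripted in-house. The sketch below, again using a hypothetical brand name, generates a few common lookalike patterns and flags any that currently resolve in DNS; commercial brand-protection services cover far more mutation types and data sources, so treat this only as a starting point.

```python
# Minimal typosquat check: generate simple lookalikes of a hypothetical
# brand name and flag any that currently resolve in DNS.
import socket

def variants(brand: str, tlds=(".com", ".net", ".co")):
    """Yield simple lookalikes: character omissions, adjacent swaps,
    and common prefixes/suffixes. Dedicated services go much further."""
    names = set()
    for i in range(len(brand)):
        names.add(brand[:i] + brand[i + 1:])                 # omission
    for i in range(len(brand) - 1):
        swapped = list(brand)
        swapped[i], swapped[i + 1] = swapped[i + 1], swapped[i]
        names.add("".join(swapped))                          # adjacent swap
    names.update({f"{brand}-login", f"secure-{brand}", f"{brand}online"})
    names.discard(brand)
    for name in sorted(names):
        for tld in tlds:
            yield name + tld

def registered(domain: str) -> bool:
    """Crude liveness check: does the domain currently resolve?"""
    try:
        socket.gethostbyname(domain)
        return True
    except socket.gaierror:
        return False

for domain in variants("examplebank"):   # hypothetical brand name
    if registered(domain):
        print("investigate:", domain)    # candidate impersonation domain
```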
Train employees and customers to verify URLs independently, especially for sensitive activities like banking or account access. Emphasize the importance of typing URLs directly or using verified bookmarks.
The AI hallucination problem isn't going away. As these systems become more sophisticated, so too will the methods criminals use to exploit them. Organizations need to prepare for a future where AI-driven threats are just as common as traditional cybersecurity challenges.
The key is finding the right balance between embracing AI's benefits and maintaining robust security practices. This requires ongoing vigilance, continuous monitoring, and a commitment to user education.
As AI becomes increasingly integrated into our daily workflows, the stakes for accuracy continue to rise. Organizations that take proactive steps to address AI hallucinations will be better positioned to protect their users and maintain trust in an evolving digital landscape.
The solution isn't to abandon AI, but to deploy it more thoughtfully. This means implementing proper validation mechanisms, maintaining human oversight for critical decisions, and continuously monitoring for new threats.