
Image by Solen Feyissa, from Unsplash
AI Hallucinations Are Now A Cybersecurity Threat
A new study shows AI chatbots recommending fake login pages to users, exposing them to phishing and fraud under the guise of helpful answers.
In a rush? Here are the quick facts:
- 34% of AI-suggested login URLs were fake, unclaimed, or unrelated.
- Perplexity AI recommended a phishing site instead of Wells Fargo’s official login.
- Criminals are optimizing phishing pages to rank in AI-generated results.
In its study, cybersecurity firm Netcraft tested a popular large language model (LLM) by asking where to log into 50 well-known brands. Of the 131 website links it suggested, 34% were wrong: inactive or unregistered domains made up 29%, and unrelated businesses accounted for 5%.
This issue isn’t theoretical. In one real example, the AI-powered search engine Perplexity displayed a phishing site to a user who searched for the Wells Fargo login page. A fake Google Sites page imitating the bank appeared at the top of the results, while the authentic link was buried further down.
Netcraft explained: “These were not edge-case prompts. Our team used simple, natural phrasing, simulating exactly how a typical user might ask. The model wasn’t tricked—it simply wasn’t accurate. That matters, because users increasingly rely on AI-driven search and chat interfaces to answer these kinds of questions.”
As AI becomes the default interface on platforms like Google and Bing, the risk grows. Unlike traditional search engines, chatbots present information clearly and confidently, which leads users to trust their answers even when the information is wrong.
The threat doesn’t stop at phishing. Modern cybercriminals optimize their malicious content for AI systems, which results in thousands of scam pages, fake APIs, and poisoned code that slip past filters and end up in AI-generated responses.
In one campaign, attackers created a fake blockchain API, which they promoted through GitHub repositories and blog articles, to trick developers into sending cryptocurrency to a fraudulent wallet.
Netcraft warns that preemptively registering the fake domains AI invents isn’t enough on its own. Instead, the firm recommends smarter detection systems and better training safeguards to prevent AI from inventing harmful URLs in the first place.
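One simple layer of the kind of detection Netcraft alludes to is validating an AI-suggested URL against a brand's known official domains before presenting it. The sketch below is illustrative only: the `OFFICIAL_DOMAINS` allowlist and the `is_plausible_official_url` helper are hypothetical names invented for this example, not part of any product mentioned in the article.

```python
# Hypothetical sketch: check an AI-suggested login URL against an
# allowlist of a brand's official domains before showing it to a user.
# The brand/domain data here is illustrative, not a real vendor list.
from urllib.parse import urlparse

# Illustrative allowlist of official domains per brand
OFFICIAL_DOMAINS = {
    "wellsfargo": {"wellsfargo.com"},
}

def is_plausible_official_url(brand: str, url: str) -> bool:
    """Return True only if the URL's host is an official domain for
    the brand, or a subdomain of one; anything else is flagged."""
    host = (urlparse(url).hostname or "").lower().rstrip(".")
    for domain in OFFICIAL_DOMAINS.get(brand, set()):
        # Exact match, or a true subdomain (dot-separated suffix)
        if host == domain or host.endswith("." + domain):
            return True
    return False
```

For example, a genuine `wellsfargo.com` subdomain passes, while a lookalike page hosted elsewhere (such as a Google Sites URL, or `wellsfargo.com.evil.example`) is rejected, because matching on a dot-separated suffix prevents the classic trick of embedding the real domain inside a longer hostname. A real detection system would go further, but an allowlist check of this kind is a cheap guardrail against hallucinated links.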