Hackers Trick Google Gemini Into Spreading Fake Security Alerts

Image by Solen Feyssa, from Unsplash

Hackers Trick Google Gemini Into Spreading Fake Security Alerts

Reading time: 2 min

Invisible text in emails is tricking Google’s Gemini AI into generating fake security alerts, exposing users to phishing and social engineering risks.

In a rush? Here are the quick facts:

  • Hidden text tricks Gemini into adding fake security alerts to email summaries.
  • Attack needs no links, just invisible HTML and CSS in emails.
  • Google acknowledges the issue, and says fixes are being rolled out.

A new vulnerability in Google’s Gemini was discovered by cybersecurity researchers at 0DIN. The AI tool for Workspace presents a new security flaw which allows attackers to push phishing attacks on users.

The attack works through a technique known as indirect prompt injection.The researchers explain that the attacker embeds hidden instructions inside an email message. It does this by writing it in white or zero-size font.

When the recipient clicks on “Summarize this email,” Gemini reads the invisible command and adds a fake warning to the summary—such as a message claiming the user’s Gmail account has been compromised and urging them to call a number.

Because the hidden text is invisible to the human eye, the victim only sees the AI-generated alert, not the original embedded instruction.

This clever trick doesn’t rely on malware or suspicious links. It uses simple HTML/CSS tricks to make the hidden text invisible to humans but readable by Gemini’s AI system.

Once triggered, Gemini adds messages like: “WARNING: Your Gmail password has been compromised. Call 1-800…”—leading victims to unknowingly hand over personal information.

A Google spokesperson told BleepingComputer that the company is actively reinforcing protections against such attacks: “We are constantly hardening our already robust defenses through red-teaming exercises that train our models to defend against these types of adversarial attacks,”

0DIN’s research underscores a growing issue: AI tools can be manipulated just like traditional software. Until protections improve, users should treat AI-generated summaries with caution—especially those claiming urgent security threats.

Did you like this article? Rate it!
I hated it I don't really like it It was ok Pretty good! Loved it!

We're thrilled you enjoyed our work!

As a valued reader, would you mind giving us a shoutout on Trustpilot? It's quick and means the world to us. Thank you for being amazing!

Rate us on Trustpilot
0 Voted by 0 users
Title
Comment
Thanks for your feedback