Researchers Hijack Google Gemini AI To Control Smart Home Devices

Image by Jakub Żerdzicki, from Unsplash

Researchers Hijack Google Gemini AI To Control Smart Home Devices

Reading time: 3 min

Researchers were able to trick Google’s Gemini AI system to experience a security breach via a fake calendar invitation, and remotely control home devices.

In a rush? Here are the quick facts:

  • The attack turned off lights, opened shutters, and started a smart boiler.
  • It’s the first known AI hack with real-world physical consequences.
  • The hack involved 14 indirect prompt injection attacks across web and mobile.

In a first-of-its-kind demonstration, researchers successfully compromised Google’s Gemini AI system through a poisoned calendar invitation, which enabled them to activate real-world devices including lights, shutters, and boilers.

WIRED, who first reported this research, describes how smart lights at the Tel Aviv residence automatically turned off, while shutters automatically rose and the boiler switched on, despite no resident commands.

The Gemini AI system activated the trigger after receiving a request to summarize calendar events. A hidden indirect prompt injection function operated inside the invitation to hijack the AI system’s behaviour.

Each of the device actions was orchestrated by security researchers Ben Nassi from Tel Aviv University, Stav Cohen from the Technion, and Or Yair from SafeBreach. “LLMs are about to be integrated into physical humanoids, into semi- and fully autonomous cars, and we need to truly understand how to secure LLMs before we integrate them with these kinds of machines, where in some cases the outcomes will be safety and not privacy,” Nassi warned, as reported by WIRED.

At the Black Hat cybersecurity conference in Las Vegas, the team disclosed their research about 14 indirect prompt-injection attacks, which they named ‘Invitation Is All You Need,’ as reported by WIRED. The attacks included sending spam messages, creating vulgar content, initiating Zoom calls, stealing email content, and downloading files to mobile devices.

Google says no malicious actors exploited the flaws, but the company is taking the risks seriously. “Sometimes there’s just certain things that should not be fully automated, that users should be in the loop,” said Andy Wen, senior director of security for Google Workspace, as reported by WIRED.

But what makes this case even more dangerous is a broader issue emerging in AI safety: AI models can secretly teach each other to misbehave.

A separate study found that models can pass on dangerous behaviors, such as encouraging murder or suggesting the elimination of humanity, even when trained on filtered data.

This raises a chilling implication: if smart assistants like Gemini are trained using outputs from other AIs, malicious instructions could be quietly inherited and act as sleeper commands, waiting to be activated through indirect prompts.

Security expert David Bau warned of backdoor vulnerabilities that could be “very hard to detect,” and this could be especially true  in systems embedded in physical environments.

Wen confirmed that the research has “accelerated” Google’s defenses, with fixes now in place and machine learning models being trained to detect dangerous prompts. Still, the case shows how quickly AI can go from helpful to harmful, without ever being directly told to.

Did you like this article? Rate it!
I hated it I don't really like it It was ok Pretty good! Loved it!

We're thrilled you enjoyed our work!

As a valued reader, would you mind giving us a shoutout on Trustpilot? It's quick and means the world to us. Thank you for being amazing!

Rate us on Trustpilot
0 Voted by 0 users
Title
Comment
Thanks for your feedback