Image by Gaining Visuals, from Unsplash
Malicious Google Calendar Invites Can Make ChatGPT Leak Your Emails
A security researcher reported how a fake Google Calendar invitation can steal private email content from ChatGPT when Gmail connectors are enabled.
In a rush? Here are the quick facts:
- Attack works if Gmail and Calendar connectors are enabled in ChatGPT.
- Automatic Google connectors allow ChatGPT to access data without explicit prompts.
- Indirect prompt injection hides malicious instructions inside calendar event text.
Eito Miyamura explained the attack method on X, showing how hackers use calendar invites with hidden instructions, then waits for the victim to ask ChatGPT to perform a task, as first reported by Tom’s Hardware.
The attacker embeds malicious commands in the event, which ChatGPT then executes automatically following the malicious instructions. “All you need? The victim’s email address,” Miyamura claims.
We got ChatGPT to leak your private email data 💀💀
All you need? The victim’s email address. ⛓️💥🚩📧
On Wednesday, @OpenAI added full support for MCP (Model Context Protocol) tools in ChatGPT. Allowing ChatGPT to connect and read your Gmail, Calendar, Sharepoint, Notion,… pic.twitter.com/E5VuhZp2u2
— Eito Miyamura | 🇯🇵🇬🇧 (@Eito_Miyamura) September 12, 2025
Tom’s Hardware notes that in mid-August, OpenAI added native Gmail, Google Calendar, and Google Contacts connectors to ChatGPT. After granting permission, the assistant has automatic access to the users’ Google account data. This means even a casual question like “What’s on my calendar today?” can access your calendar.
The help center of OpenAI explains that these connectors activate automatic data access only when enabled, though you can turn it off in settings to select sources manually.
Tom’s Hardware explains that the Model Context Protocol enables developers to create custom connectors, however OpenAI does not monitor these connections. Miyamura highlights this point as this attack depends on a new overall ecosystem.
The attack method, called indirect prompt injection, conceals harmful commands inside authorized data access points, which in this case are text embedded in calendar events. Similar attacks were reported in August, showing how compromised invites could steer Google’s Gemini AI and even control smart-home devices.
The system remains inactive unless Gmail and Calendar services are linked inside ChatGPT. Users who want to minimize risks should disconnect their sources and turn off automatic data access.
Experts advise changing Google Calendar’s “Automatically add invitations” setting so only invites from known contacts appear and hiding declined events.