Hackers Can Weaponize AI Summarizers to Spread Malware

Image by Nahel Hadi, from Unsplash

Hackers Can Weaponize AI Summarizers to Spread Malware

Reading time: 2 min

A new cybersecurity research has exposed a concerning way in which hackers can use AI summarizers to spread ransomware attacks.

In a rush? Here are the quick facts:

  • Invisible CSS tricks make payloads unreadable to humans but visible to AI.
  • Prompt overdose floods AI with repeated commands to control output.
  • Summarizers may unknowingly deliver ransomware steps to unsuspecting users.

Researchers at CloudSek explain that via a ClickFix social engineering attack, hackers can embed harmful instructions within documents using invisible coding techniques. The hidden text remains invisible to human eyes but AI summarizers can detect it, and when they generate summaries, they may unknowingly pass on the dangerous instructions to users.

“A novel adaptation of the ClickFix social engineering technique has been identified, leveraging invisible prompt injection to weaponize AI summarization systems,” researchers said.

The attack depends on CSS-based obfuscation which uses zero-width characters and white-on-white text and tiny fonts and off-screen placement to hide instructions. The summarizer detects the invisible code which remains invisible to human eyes.

The attackers implement prompt overdose by adding numerous commands to hidden sections which causes the AI to select these commands first in its output.

“When such crafted content is indexed, shared, or emailed, any automated summarization process that ingests it will produce summaries containing attacker-controlled ClickFix instructions,” the study explained.

During testing, researchers demonstrated how an AI summarizer could be manipulated to instruct users to run dangerous PowerShell commands.  While the test version was harmless, a real attack could easily launch ransomware.

The risk level of this attack is high because AI summarizers are integrated into email clients and browsers and workplace applications. The researchers point out that many people trust the summaries without reading the original document, making it easier for attackers to exploit that trust.

Experts recommend defenses such as stripping out hidden text before content is summarized, filtering suspicious prompts, and warning users if summaries contain step-by-step instructions.

Without these protections, AI could unintentionally become “an active participant in the social engineering chain.”

Did you like this article? Rate it!
I hated it I don't really like it It was ok Pretty good! Loved it!

We're thrilled you enjoyed our work!

As a valued reader, would you mind giving us a shoutout on Trustpilot? It's quick and means the world to us. Thank you for being amazing!

Rate us on Trustpilot
0 Voted by 0 users
Title
Comment
Thanks for your feedback