
New Malware Uses GPT-4 To Generate Attacks On The Fly
Security researchers have found early evidence of malware that uses large language models (LLMs) to generate malicious actions on the fly.
In a rush? Here are the quick facts:
- Researchers found malware using LLMs to generate code at runtime.
- Malware dubbed MalTerminal used GPT-4 to generate ransomware and reverse shells.
- Traditional antivirus tools struggle to detect runtime-generated malicious code.
The findings were presented at LABScon 2025 in a talk titled “LLM-Enabled Malware In the Wild.”
According to SentinelLABS, “LLM-enabled malware poses new challenges for detection and threat hunting as malicious logic can be generated at runtime rather than embedded in code.”
Because the harmful code does not exist until execution time, these threats are very difficult for standard signature-based antivirus systems to detect.
The team identified what they believe may be the earliest known example of this kind of malware, which they dubbed ‘MalTerminal’. The Python-based tool calls OpenAI’s GPT-4 API at runtime to generate ransomware and reverse-shell attacks.
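To make that pattern concrete, here is a minimal, deliberately benign sketch of the runtime-generation loop the researchers describe. It assumes the official openai Python client; the model name, prompt, and harmless payload are illustrative placeholders, not MalTerminal’s actual code.

```python
# Benign illustration of LLM-enabled runtime code generation.
# This is a sketch of the pattern, not MalTerminal itself: in real
# malware the embedded prompt would request ransomware or
# reverse-shell logic instead of a harmless function.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "Write a Python function that prints the current "
                   "time, then call it. Return only code, no markdown.",
    }],
)

generated_code = response.choices[0].message.content

# The logic arrives only at runtime and lives only in memory, so a
# static scan of the file on disk has no malicious code to match.
exec(generated_code)
```

Note that nothing harmful sits in the file on disk; only the API call and the embedded prompt are ever visible to a static scanner.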
The researchers also documented additional offensive tools, including vulnerability injectors and phishing aids, to show how attackers are experimenting with LLMs.
“On the face of it, malware that offloads its malicious functionality to an LLM that can generate code-on-the-fly looks like a detection engineer’s nightmare,” the researchers wrote.
Other cases include ‘PromptLock’, proof-of-concept AI-powered ransomware that surfaced in 2025, and PROMPTSTEAL, malware linked to the Russian group APT28. The researchers explain that PROMPTSTEAL shipped with 284 embedded Hugging Face API keys and used LLMs to produce system commands for stealing files.
Researchers found that despite its sophistication, LLM-enabled malware must include “embedded API keys and prompts,” leaving traces that defenders can track. They wrote, “This makes LLM enabled malware something of a curiosity: a tool that is uniquely capable, adaptable, and yet also brittle.”
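A minimal hunting sketch along those lines might simply scan files for those embedded traces. The regexes below assume common public key formats (OpenAI keys beginning with “sk-”, Hugging Face tokens beginning with “hf_”) plus a generic prompt-scaffolding phrase; they are illustrative, not SentinelLABS’s actual hunting rules.

```python
# Illustrative scan for the artifacts LLM-enabled malware must carry:
# embedded API keys and prompt strings.
import re
import sys
from pathlib import Path

# Assumed indicator patterns (illustrative, not vendor rules).
INDICATORS = {
    "openai_key": re.compile(rb"sk-[A-Za-z0-9]{20,}"),
    "huggingface_token": re.compile(rb"hf_[A-Za-z0-9]{30,}"),
    "prompt_scaffold": re.compile(rb"[Yy]ou are (a|an) [A-Za-z ]{3,40}"),
}

def scan(path: Path) -> list[str]:
    """Return the names of indicators found in the file's raw bytes."""
    data = path.read_bytes()
    return [name for name, rx in INDICATORS.items() if rx.search(data)]

if __name__ == "__main__":
    for target in sys.argv[1:]:
        hits = scan(Path(target))
        if hits:
            print(f"{target}: {', '.join(hits)}")
```

Because a sample cannot call the model without shipping its key and prompt, even simple string scans like this give defenders a durable signal, which is exactly the brittleness the researchers highlight.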
For now, the use of LLM-enabled malware appears rare and mostly experimental. But experts warn that as adversaries refine their methods, these tools could become a serious cybersecurity threat.