Threat Actors Manipulating AI to ‘Enhance All Stages’ of Malicious Attacks, Google Threat Intelligence Group Warns

Key Takeaway:

New research from Google’s Threat Intelligence Group (GTIG), published November 6, 2025, shows that threat actors, both state-sponsored and criminal, are moving beyond productivity uses of AI to deploy LLM-enabled malware that rewrites itself mid-execution. At least five distinct malware families, attributed to actors including Russia’s APT28 and North Korea’s UNC1069, now use models such as Gemini and Qwen2.5-Coder for “just-in-time” code creation, obfuscation, and operational support across the full attack lifecycle (recon → C2 → exfiltration). This marks a new operational phase of AI abuse. (GTIG AI Threat Tracker: Advances in Threat Actor Usage of AI Tools, Nov 2025)

Contemplating Code and Connection - Threat actors manipulating AI to enhance all stages of malicious attacks (Image Credit - ChatGPT, The AI Track)

Threat Actors Use AI for Adaptive Malware – Key Points

  • Five AI-Powered Malware Families Identified

    GTIG’s November 2025 report catalogs FRUITSHELL, PROMPTFLUX, PROMPTLOCK, PROMPTSTEAL, and QUIETVAULT, malware families developed and deployed by threat actors that integrate AI at runtime. These tools call external models (Gemini, Qwen2.5-Coder-32B-Instruct) to generate or obfuscate code on demand, a pattern GTIG names “just-in-time AI.” The approach enables continuous self-modification that evades signature-based detection while leaving fewer static forensic artifacts.

  • PROMPTFLUX and PROMPTSTEAL: AI in Live Operations

    PROMPTFLUX (VBScript dropper, experimental) includes a “Thinking Robot” module that sends hard-coded API requests to the gemini-1.5-flash-latest model for code obfuscation and evasion, logs responses to %TEMP%\thinking_robot_log.txt, writes regenerated code to the Windows Startup folder, and spreads through removable drives and network shares. Variants include a “Thinging” function that rewrites the malware’s source code hourly. PROMPTSTEAL (Python data miner, operational), linked to APT28/FROZENLAKE, queries Qwen2.5-Coder-32B-Instruct via the Hugging Face API to generate Windows commands for system enumeration and document exfiltration, showing how threat actors now use generative AI to automate intrusion steps.

  • Additional LLM-Driven Families and Tools

    FRUITSHELL (PowerShell reverse shell), PROMPTLOCK (Go ransomware proof-of-concept), and QUIETVAULT (JavaScript credential stealer) all demonstrate how threat actors are embedding LLMs into active malware. PROMPTLOCK uses AI to build Lua payloads at runtime; QUIETVAULT targets GitHub/NPM tokens, exploiting on-host AI CLIs to extract secrets.

  • North Korean Group UNC1069 Exploits Gemini for Crypto Theft

    UNC1069 (Masan) misused Gemini to research cryptocurrency concepts, locate wallet data, and craft multilingual phishing lures. This threat actor also developed fake update scripts to steal credentials and used deepfake images to distribute a BIGMACHO backdoor disguised as a Zoom SDK.

  • Broader State-Actor Misuse: China, Iran, Russia, DPRK

    GTIG highlights China-, Iran-, and DPRK-linked threat actors using Gemini for reconnaissance, phishing, lateral movement, and data theft. Groups like APT41, TEMP.Zagros, APT42, and UNC4899 exploit LLMs for tool development, SQL data extraction, and C2 obfuscation, often posing as students or researchers to bypass AI guardrails.

  • Social Engineering to Bypass AI Safeguards

    Many threat actors use social pretexts, such as claiming to be capture-the-flag (CTF) competitors or academic researchers, to persuade AI models to disclose exploitation guidance that would otherwise be blocked. GTIG warns that these techniques reveal increasing sophistication in prompt manipulation.

  • Illicit AI Tooling Marketplace Matures (2025)

    Underground forums now sell AI-powered hacking suites with subscription pricing, API access, and Discord support, mirroring legitimate SaaS models. Such tools enable low-skill threat actors to perform phishing, vulnerability research, and deepfake generation with minimal expertise.

  • AI Models as External Malware Engines

    Offloading logic to remote AI models allows environment-aware regeneration, complicating attribution and weakening static defenses. GTIG identifies this as a paradigm shift toward “goal-driven,” API-mediated malware design.

  • Google’s Countermeasures and Forward Look

    Google has disabled abusive accounts, refined prompt filters, and increased API monitoring. It also promotes its Secure AI Framework (SAIF) and defensive agents such as Big Sleep, which hunts for vulnerabilities, and CodeMender, which uses Gemini’s reasoning to patch code automatically, to counter evolving threat actors.


Why This Matters:

AI-assisted malware now adapts dynamically to targets, scaling through APIs and marketplace tools. Threat actors can blend social engineering and model steering to evade detection, forcing defenders to rely on behavioral analysis, LLM telemetry, and API-level monitoring. The report confirms a structural turning point—AI is now a force multiplier for cyber offense.
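For defenders, one practical consequence is that outbound traffic to public LLM API endpoints can itself become a behavioral signal. The sketch below is a hypothetical illustration of that idea, not a tool described in the GTIG report: it flags telemetry events in which a process with no obvious reason to call a model API contacts a known LLM endpoint. The hostnames, the process allowlist, and the `flag_llm_calls` helper are all illustrative assumptions.

```python
# Hypothetical defensive sketch: flag outbound connections to public LLM API
# endpoints from processes that normally have no reason to call them.
# Hostnames and the process allowlist are illustrative assumptions.

LLM_API_HOSTS = {
    "generativelanguage.googleapis.com",  # Gemini API endpoint
    "api-inference.huggingface.co",       # Hugging Face inference API
}

# Illustrative allowlist of processes expected to reach LLM APIs.
EXPECTED_PROCESSES = {"chrome.exe", "python.exe"}

def flag_llm_calls(events):
    """Return events where an unexpected process contacted an LLM API host.

    `events` is an iterable of (process_name, destination_host) pairs,
    e.g. drawn from proxy or EDR telemetry.
    """
    return [
        (proc, host)
        for proc, host in events
        if host in LLM_API_HOSTS and proc.lower() not in EXPECTED_PROCESSES
    ]

# Example telemetry: a script host contacting a Gemini endpoint is suspicious,
# echoing the PROMPTFLUX pattern of a VBScript dropper calling the model API.
alerts = flag_llm_calls([
    ("wscript.exe", "generativelanguage.googleapis.com"),
    ("chrome.exe", "generativelanguage.googleapis.com"),
    ("wscript.exe", "example.com"),
])
# alerts -> [("wscript.exe", "generativelanguage.googleapis.com")]
```

In practice a check like this would run continuously over proxy or EDR logs, and the allowlist would come from an organization’s own traffic baseline rather than a hard-coded set.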


This article was drafted with the assistance of generative AI. All facts and details were reviewed and confirmed by an editor prior to publication.
