Threat actors are now exploiting the rapid adoption of AI agents by deploying malware specifically designed to target the agents themselves. A new campaign, dubbed GhostClaw or GhostLoader, focuses on AI-assisted workflows and GitHub repositories to deliver credential-stealing payloads.
GhostClaw introduces a novel vector in software supply chain attacks. Unlike traditional methods that rely solely on human developers downloading malicious packages, GhostClaw's operators construct traps for AI agents like OpenClaw to trigger autonomously. Upon execution, the malware establishes a persistent Remote Access Trojan (RAT), enabling the harvesting of system credentials, browser data, developer tokens, and cryptocurrency wallets.
This campaign capitalizes on the elevated system permissions developers often grant to local AI agents. GhostClaw highlights the growing trend of bots becoming primary attack surfaces, serving as a critical warning for development teams automating coding tasks with AI frameworks.
Understanding GhostClaw's operation requires insight into AI tool deployment. OpenClaw, an open-source AI agent, functions as an autonomous, always-on coding assistant. Its continuous operation of local models demands substantial compute power, leading to increased Mac Mini sales as developers leverage Apple’s unified memory for these resource-intensive local AI servers. GhostClaw operators meticulously designed their campaign for this specific macOS environment, utilizing native AppleScript and local directories for stealthy background execution.
The attack commences with social engineering. Threat actors stage GitHub repositories impersonating legitimate developer utilities, trading bots, or AI plugins. To evade immediate detection, these repositories remain benign for an incubation period of five to seven days, accumulating stars and followers to establish credibility. Once trust is built, the malicious payload is swapped in. For developers leveraging AI frameworks, the trap is typically embedded within a SKILL.md file. AI agents utilize "skills" for external interactions, such as reading local files, executing shell commands, or managing emails, with the SKILL.md format defining these capabilities.
Within GhostClaw's malicious repositories, the SKILL.md file itself contains no malicious code, presenting benign metadata, dependencies, and commands. However, a multi-stage infection is triggered when an AI agent or human developer follows the repository’s setup instructions, typically by executing an install.sh script or installing dependencies via a package manager.
Researchers observed this behavior in malicious npm packages disguised as OpenClaw installers. The package configuration appears normal, and the exposed source code contains harmless decoy utilities.