AI Agents’ Most Downloaded Skill Is Discovered to Be an Infostealer


By Hudson Rock Intelligence Team | February 6, 2026

In a sophisticated intersection of AI hype and malicious intent, a new threat has emerged targeting developers and AI power-users. Recent research from Jason Meller and the security team at 1Password has highlighted a campaign involving a fraudulent VS Code extension that impersonates “Moltbot,” a popular AI coding assistant.

The attack is not merely about stealing credentials. It signals a shift toward what researchers are calling “Cognitive Context Theft.” This involves the exfiltration of “memories,” transcripts, and environment configurations that AI agents use to operate within a corporate perimeter.

Figure 1: Malware scan results identifying the malicious payload bundled with the fake VS Code extension.

The Anatomy of an AI Leak: ClawdBot Analysis

Our analysis at Hudson Rock confirms that “Local-First” AI agents like ClawdBot introduce a massive “honey pot” for commodity malware. These agents often store sensitive memories and authentication tokens in plaintext Markdown and JSON files. Unlike encrypted browser stores, these files are readable by any process running as the user.

For infostealers, files like MEMORY.md provide a psychological dossier of the user. You can find our full technical breakdown in the article ClawdBot: The New Primary Target for Infostealers in the AI Era.

~/clawd/memory/memory.md

## Session Summary: 2026-01-24
User asked to save VPN configuration for remote work.
VPN_GATEWAY: vpn.corporatenet.com (Cisco AnyConnect)
VPN_GROUP_KEY: "T3chC0rp_Rem0te_Users"
VPN_STATIC_PASS: "Winter2026!Secure"
User mentioned: "I need to log into the Change Healthcare portal by 9 AM."
# Note: This context reveals the target organization.

~/.clawdbot/clawdbot.json

// The Master Config: Controls the Agent Gateway
{
  "gateway": {
    "port": 3000,
    "auth": {
      "mode": "token",
      "token": "cl_live_99283…RCE_RISK…"
    }
  }
}
// If stolen, the attacker can execute remote shell commands.
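The exposure described above is straightforward to check for. Below is a minimal audit sketch, assuming the file paths used in this article's examples (an actual install may place them elsewhere); it reports whether these plaintext files exist and whether their permission bits allow reads beyond the owning user:

```python
import stat
from pathlib import Path

# Paths mirroring the examples in this article (assumed; adjust per install).
CANDIDATES = [
    Path.home() / "clawd" / "memory" / "memory.md",
    Path.home() / ".clawdbot" / "clawdbot.json",
]

def audit(paths):
    """Return (path, octal mode, group/other-readable) for each
    plaintext agent file that exists on disk."""
    findings = []
    for p in paths:
        if not p.exists():
            continue
        mode = p.stat().st_mode
        # True if group or "other" read bits are set (POSIX semantics;
        # mode bits carry less meaning on Windows).
        exposed = bool(mode & (stat.S_IRGRP | stat.S_IROTH))
        findings.append((str(p), oct(mode & 0o777), exposed))
    return findings

if __name__ == "__main__":
    for path, mode, exposed in audit(CANDIDATES):
        flag = "GROUP/WORLD READABLE" if exposed else "owner-only"
        print(f"{path}: mode={mode} ({flag})")
```

Note that even an owner-only file offers no protection against the threat described here: an infostealer running as the user reads it with the user's own permissions.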

Real World Precedents: From Memory to Breach

Why is a text file containing VPN or Gateway keys so dangerous? History shows that single compromised credentials are the root cause of the largest cyber breaches in recent history.

Case Study 1: VPN Compromise Leading to Ransomware

The $22 Million Key: Change Healthcare

In 2024, the Change Healthcare ransomware attack resulted in a staggering $22,000,000 payout. The entry point? A single compromised Citrix/VPN credential found on an employee’s machine infected by an infostealer.

HUDSON ROCK CAVALIER: Compromised VPN Credentials

Vendor          Compromised Credentials
Cisco (CSCOE)   2,500+
Fortinet        1,800+
Case Study 2: Collaboration Tools Data Leak

The Atlassian & Jira Attack Surface

Incidents involving Hy-Vee (53GB of data stolen) and Jaguar Land Rover demonstrate the catastrophic consequences of compromised collaboration credentials. Storing API tokens in a plaintext file like tools.md hands attackers the keys to the entire corporate knowledge base.

Victim              Asset Stolen                  Impact
Hy-Vee              Atlassian Cloud Credentials   53GB Data Heist
Jaguar Land Rover   Jira Access Token             Hellcat Ransomware Entry

How Infostealers are Adapting

Major Malware-as-a-Service (MaaS) families are already evolving to target these structures. RedLine uses modular “FileGrabbers” to sweep for .clawdbot configs, while Lumma employs heuristics to find anything named “secret” or “config” in AI directories.

{
  "target": "ClawdBot",
  "paths": [
    "%USERPROFILE%/.clawdbot/clawdbot.json",
    "%USERPROFILE%/clawd/memory/*.md"
  ],
  "regex": "(auth.token|sk-ant-|jira_token)"
}
“If an attacker compromises the same machine you run an AI agent on, they don’t need to do anything fancy. Modern infostealers scrape common directories and exfiltrate everything that looks like credentials, tokens, session logs, or developer config.” — Jason Meller, 1Password
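Defenders can invert this targeting logic. The sketch below, which assumes the directory layout and regex from the grabber config shown above, scans a home directory for exactly the files an infostealer with that config would collect, so exposed tokens can be rotated and vaulted before they are stolen:

```python
import re
from pathlib import Path

# Regex and globs mirroring the grabber config shown above.
# Note '.' in "auth.token" matches any character, as written in the config.
SECRET_RE = re.compile(r"(auth.token|sk-ant-|jira_token)")
TARGET_GLOBS = [".clawdbot/clawdbot.json", "clawd/memory/*.md"]

def scan(home: Path):
    """Yield (path, matched pattern) for every file under `home`
    that this grabber config would sweep up and exfiltrate."""
    for pattern in TARGET_GLOBS:
        for f in home.glob(pattern):
            try:
                text = f.read_text(errors="ignore")
            except OSError:
                continue
            m = SECRET_RE.search(text)
            if m:
                yield str(f), m.group(0)

if __name__ == "__main__":
    for path, hit in scan(Path.home()):
        print(f"[!] {path} matches '{hit}' -- rotate and vault this secret")
```

Running this against your own home directory is a cheap way to see your machine the way a FileGrabber module does; anything it flags should live in an encrypted secret store rather than a Markdown or JSON file.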

Mainstream Attention: Elon Musk Weighs In

The incident has caught the attention of the broader tech community. Elon Musk commented on the inherent risks of deeply integrated AI tools that lack a proper security sandbox. The vulnerability of “Agentic” workflows, where AI has the power to read and write to the system, becomes a high-impact control point for attackers.

Figure 2: Discussion on the systemic risks of AI agent hijacking and credential exfiltration.

Is your organization at risk? Organizations can use Hudson Rock’s Free Tools to identify if any employee credentials or developer tokens have been compromised by these evolving infostealer campaigns.

BE THE FIRST TO KNOW

Get FREE access to Cavalier GPT

Stay informed with the latest insights in our Infostealers weekly report.

Receive a notification if your email is involved in an Infostealer infection.

No Spam, We Promise
