On April 22, 2026, a package published as @bitwarden/cli@2026.4.0 appeared on npm. It was not published by Bitwarden. It lived on the registry for 90 minutes before being pulled. In that window it was downloaded 334 times. What it deployed was more sophisticated than typical typosquat malware: a multi-cloud credential harvester, a self-replicating npm worm that spread to other packages in the developer's workspace, and a module built specifically to extract secrets from authenticated AI coding assistant sessions. Researchers at Endor Labs documented the campaign as Shai-Hulud.
The AI-assistant targeting is the detail that security teams most need to understand. It is new, and it signals that attackers have mapped the authentication surfaces of modern developer tooling and identified AI coding sessions as a high-value, undertargeted credential store.
The Self-Replicating Worm Component
After initial execution, the malicious package scanned the victim's local npm workspace for other packages they maintained. For each package found with a package.json indicating publish rights, the worm injected a copy of its postinstall payload into that package's source tree and attempted to publish the infected version to npm using whatever credentials were available in the local environment. This is the worm behavior: each infected developer becomes a potential propagation node, turning their own trusted packages into delivery mechanisms.
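A defensive corollary: because the worm propagates by injecting a `postinstall` hook into maintained packages, you can audit your own workspace for manifests whose install hooks invoke external code. The sketch below is illustrative only; the marker strings are generic heuristics for install-hook abuse, not signatures from the actual Shai-Hulud payload.

```python
import json
from pathlib import Path

# Heuristic markers often seen in install-hook abuse: a postinstall script
# that runs a bundled JS file or pipes content into a shell. These are
# illustrative assumptions, not indicators from the real campaign.
SUSPICIOUS_MARKERS = ("node ", "curl ", "wget ", "| sh", "| bash")

def find_suspicious_postinstalls(workspace: str) -> list[tuple[str, str]]:
    """Return (manifest path, postinstall command) pairs worth reviewing."""
    findings = []
    for manifest in Path(workspace).rglob("package.json"):
        if "node_modules" in manifest.parts:
            continue  # audit only packages you maintain, not dependencies
        try:
            scripts = json.loads(manifest.read_text()).get("scripts", {})
        except (json.JSONDecodeError, OSError):
            continue
        hook = scripts.get("postinstall", "")
        if any(marker in hook for marker in SUSPICIOUS_MARKERS):
            findings.append((str(manifest), hook))
    return findings
```

Any hit deserves a manual diff against the last version you knowingly published, since a worm-injected hook will not appear in your own commit history.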
The design is efficient. Developers who install packages for their own projects are trusted by their downstream users. A worm that spreads through maintainer accounts does not need to compromise npm's infrastructure. It spreads through the social trust graph of the package ecosystem itself.
The figure of 334 initial downloads understates the total exposure. Each infected developer who had npm publish credentials in their environment became a potential source of further infection. Any packages they maintained and published after infection should be treated as suspect.
Targeting AI Coding Assistant Sessions
The component targeting AI coding assistants is documented as the first known malware built specifically for this purpose. The module targeted authenticated sessions for Claude Code and GitHub Copilot. The mechanism varied by tool:
- For Claude Code: targeted the local session token stored in ~/.claude/ and associated API key references in environment or config files
- For GitHub Copilot: targeted authentication tokens stored in VS Code's secret storage and the GitHub CLI credential store
- For both: attempted to read in-memory context from running extension processes where accessible
The rationale is straightforward. Developers authenticate AI coding assistants with high-privilege tokens, often tied to organizational accounts with broad repository access. They leave those sessions running continuously. The tokens are not rotated frequently. From an attacker's perspective, a Claude Code or Copilot session token can be more valuable than a plain GitHub token because it may provide broader context and access patterns.
What This Means for Development Teams
The six credential surfaces targeted in this campaign map directly to how modern developers authenticate their tooling: npm tokens, AWS credentials, Azure service principal credentials, GCP service account keys, Claude Code session tokens, and GitHub Copilot tokens. Any developer machine running an authenticated AI coding assistant is now explicitly in scope for targeted malware.
The immediate operational response: treat AI assistant session tokens like any other long-lived credential. Do not leave sessions authenticated indefinitely on shared or less-secured developer machines. Use short-lived tokens where the platform supports it. Audit what credential surfaces exist on developer endpoints and ensure endpoint detection covers those paths.
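The audit step above can be sketched as an endpoint inventory of the six targeted surfaces. The file paths and environment-variable names below are assumptions based on common tool defaults, not an exhaustive signature set; adapt them to your fleet.

```python
import os
from pathlib import Path

# One plausible mapping of the six targeted credential surfaces to default
# on-disk locations (assumed tool defaults, illustration only).
FILE_SURFACES = {
    "npm token": "~/.npmrc",
    "AWS credentials": "~/.aws/credentials",
    "Azure CLI state": "~/.azure",
    "GCP credentials": "~/.config/gcloud",
    "Claude Code session": "~/.claude",
    "GitHub CLI / Copilot token": "~/.config/gh/hosts.yml",
}
# Environment variables that commonly carry the same secrets.
ENV_SURFACES = ["NPM_TOKEN", "AWS_SECRET_ACCESS_KEY", "AZURE_CLIENT_SECRET",
                "GOOGLE_APPLICATION_CREDENTIALS", "ANTHROPIC_API_KEY",
                "GITHUB_TOKEN"]

def credential_inventory(home=None, env=None):
    """Report which credential files and environment variables are present."""
    base = Path(home) if home is not None else Path.home()
    env = os.environ if env is None else env
    files = {name: str(base / p[2:]) for name, p in FILE_SURFACES.items()
             if (base / p[2:]).exists()}
    env_hits = [v for v in ENV_SURFACES if v in env]
    return {"files": files, "env": env_hits}
```

Running this across developer endpoints tells you, per machine, which of the surfaces this campaign targeted would actually have yielded secrets, and therefore where rotation and detection coverage matter most.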
The legitimate Bitwarden CLI is published as @bitwarden/cli by the verified Bitwarden organization. Always verify the publisher identity on npm before installing any package that mimics a well-known security tool. Security tooling is a high-value target for typosquat attacks precisely because it is trusted.