How to Fix the TanStack Supply Chain Attack

Learn how to fix the TanStack supply chain attack with clean version pins, credential rotation, package release cooldowns, split publish workflows, and runtime authorization.

Published 2026-05-12.

To fix the TanStack supply chain attack, treat affected hosts as compromised, pin clean package versions, preserve evidence before rotating reachable credentials, add package release cooldowns, split publish workflows away from install and test jobs, and move publish or provider credentials behind action-level runtime authorization. The TanStack npm incident was not only a dependency compromise. The dependency was the entry point; identity and access made it devastating.

The lesson for security teams is direct: supply chain security and identity security are now the same control plane. If arbitrary third-party code can run next to long-lived secrets, broad workflow permissions, AI agent configs, and publish credentials, then every compromised dependency becomes an identity compromise. Kontext addresses this class of failure with runtime authorization, least-privilege enforcement for agents, and scoped credential brokering that moves authority to the moment of action.

What happened in the TanStack npm attack?

On May 11, 2026, TanStack confirmed that an attacker published 84 malicious versions across 42 @tanstack/* npm packages. The confirmed TanStack attack chain combined a dangerous pull_request_target workflow pattern, GitHub Actions cache poisoning across a fork-to-base trust boundary, and runtime extraction of an OIDC token from the GitHub Actions runner process. TanStack's postmortem says no npm tokens were stolen and the npm publish workflow itself was not directly compromised.

The broader Mini Shai-Hulud wave continued beyond TanStack. Aikido reported 373 malicious package-version entries across 169 npm package names, including @tanstack, @mistralai, @uipath, and other scopes. SafeDep reported a wider coordinated campaign involving more than 170 npm packages and two PyPI packages, including Mistral AI SDK packages and Guardrails AI.

Those numbers matter, but the architecture matters more. The attacker did not need every maintainer's password. The attacker needed a place where untrusted code could run with trusted identity material in reach.

The short version

The TanStack attack worked because install-time code executed inside trusted environments and found usable credentials. The packages were the delivery mechanism; ambient identity was the blast radius.

For defenders, the durable fix is not "never install a bad package." That is impossible at scale. The durable fix is to make sure a bad package finds as little authority as possible: no long-lived secrets on disk, no broad tokens in the same process space, no direct publish authority during test or install steps, and no credential issuance without a runtime policy decision.

That is the same security principle behind tool invocation privilege boundaries: separate the code that proposes an action from the authority that lets the action happen.

How the attack chain worked

The public TanStack postmortem describes a three-part chain.

First, the attacker opened a pull request against TanStack's router repository. A workflow using pull_request_target checked out and built pull request code while running in the base repository's privileged context. GitHub Security Lab has warned for years that combining pull_request_target with checkout or execution of untrusted pull request code can lead to repository compromise.

Second, the attacker poisoned the GitHub Actions cache. The malicious pull request did not need to merge. It only needed to cause a cache entry to be saved under a key that a later trusted workflow would restore.

Third, when a legitimate merge triggered the release workflow, the poisoned cache was restored into a trusted run. The malicious code executed in a workflow with id-token: write, extracted the OIDC token from runner memory, and used that trusted publishing path to publish malicious npm versions. This is why TanStack could accurately say that no npm token was stolen: the workflow minted the publish authority at runtime.
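The dangerous shape of step one can be sketched as a workflow fragment. This is an illustration of the pattern GitHub Security Lab warns about, not TanStack's actual configuration; the workflow name, steps, and cache key are invented:

```yaml
# DANGEROUS (illustrative): pull_request_target runs in the base repo's
# privileged context, yet checks out and executes fork-controlled code.
name: pr-build
on: pull_request_target
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}  # untrusted PR code
      - uses: actions/cache@v4          # cache entries saved here can later be
        with:                           # restored by a trusted workflow run
          path: node_modules
          key: deps-${{ hashFiles('package-lock.json') }}
      - run: npm ci && npm run build    # lifecycle and build scripts run attacker code
```

Using plain pull_request keeps the run in the fork's unprivileged context. If pull_request_target is unavoidable, do not check out or execute PR code in that run, and do not let it write cache keys that privileged workflows restore.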

What the compromised packages did

The TanStack packages did not need to show an obvious modified source file in the repository. The published tarballs contained an optionalDependencies entry that resolved @tanstack/setup from a specific GitHub commit. That dependency's prepare lifecycle script ran the payload through Bun during package installation.
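Based on that description, the published manifest would have carried roughly this shape. Everything here except the @tanstack/setup name and the git-dependency mechanism is illustrative, and the commit hash is a placeholder:

```json
{
  "name": "@tanstack/example-package",
  "version": "0.0.0",
  "optionalDependencies": {
    "@tanstack/setup": "github:tanstack/setup#<commit-sha>"
  }
}
```

Because git dependencies are built on the consumer's machine at install time, the package manager runs the dependency's prepare lifecycle script at that point, which is where the payload executed. The optionalDependencies placement also means install failures do not block the install, making the hook quieter.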

If a developer or CI system installed an affected version, the payload looked for credentials and configuration in places attackers know to check: GitHub tokens, npm tokens, SSH keys, cloud credentials, Kubernetes service account tokens, Vault tokens, .npmrc, GitHub Actions OIDC material, IDE configuration, and AI coding-agent configuration.

SafeDep also reported propagation behavior aimed at developer tools: poisoned .claude and .vscode configuration files, GitHub GraphQL commits to victim repositories, and token patterns such as ghp_, gho_, ghs_, and npm_. That moves the incident beyond conventional package theft. It becomes a developer identity and automation compromise.

Why identity made the dependency attack worse

The supply chain vector explains how the malware got in. Identity explains what it could do after it was inside.

| Layer | What failed | Why it mattered |
| --- | --- | --- |
| Dependency trust | Install-time lifecycle code ran during npm install. | A package install became arbitrary code execution. |
| CI trust | A privileged workflow restored poisoned cache content. | Attacker-controlled code ran inside a trusted release environment. |
| OIDC trust | id-token: write was available to the workflow run. | The payload could mint publish authority without stealing an npm token. |
| Secret storage | Credentials lived in predictable local and CI locations. | The payload could harvest identity material immediately. |
| Developer tooling | AI agent and IDE configs were writable. | Stolen GitHub authority could create persistence and spread to other developers. |

This is the same failure mode that shows up in AI agent security. An agent, workflow, or install script is just code until it reaches an identity boundary. Once it can obtain credentials, call APIs, publish packages, push commits, read cloud metadata, or modify configuration, the security question changes from "is this code trusted?" to "should this exact action be allowed right now?"

Trusted publishing helped, but it was not enough

npm trusted publishing is still directionally correct. The npm documentation describes it as an OIDC-based way to publish packages without long-lived npm tokens. That is better than storing a permanent publish token in CI secrets.

The TanStack attack shows the next boundary. Short-lived tokens reduce standing secret risk, but they do not automatically prove that the right code path requested the token for the right purpose. In this case, the workflow had the ability to request OIDC identity, and malicious code reached that ability before the legitimate publish step.

The missing control is action-level authorization: not simply "may this workflow publish," but "may this step, after these checks, for this package, from this source, in this run, request publish authority now?"

This also exposes a provenance blind spot. Provenance can truthfully say that a package came from the official workflow while still failing to prove that the workflow executed the intended code path. Provenance proves pipeline origin. It does not prove action intent, cache integrity, lifecycle-script safety, or that the credential was minted only for the legitimate publish step.

That is runtime authorization applied to CI/CD.

The LiteLLM pattern

This incident belongs to the same family as the LiteLLM supply chain compromise: arbitrary third-party code ran inside a trusted environment and found credentials that were too available, too broad, or too durable.

| Question | LiteLLM-style compromise | TanStack compromise |
| --- | --- | --- |
| Entry point | Third-party code execution through the build or install path. | Poisoned Actions cache and install-time npm payloads. |
| What the attacker wanted | Developer, cloud, repository, and package credentials. | Developer, cloud, repository, npm, OIDC, IDE, and agent credentials. |
| Why it spread | Stolen credentials created the next publish or repository action. | Runtime-minted OIDC and harvested tokens enabled publication and repository poisoning. |
| Core failure | Secrets and authority were reachable by code that should not have had them. | Secrets and authority were reachable by code that should not have had them. |

The dependency is the initial exploit surface. The real impact comes from what the environment lets the dependency do next.

Four controls that would have reduced the blast radius

1. Remove long-lived secrets from environments that execute third-party code

Developer machines and CI runners regularly execute third-party code: npm install, pip install, package lifecycle hooks, build scripts, test runners, bundlers, AI agent tools, and editor extensions.

Those environments should not contain durable credentials that can publish packages, deploy infrastructure, read production data, or push to repositories. If a secret exists at install time, assume malware can read it.
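A quick way to see the problem on a given host is to enumerate the durable credential files that an install-time script could read. The path list below is a sketch of common defaults, not an exhaustive inventory; adjust it for your environment:

```python
from pathlib import Path

# Common locations for durable credentials (illustrative, not exhaustive).
# Anything found here is equally readable by npm lifecycle scripts.
CANDIDATE_PATHS = [
    ".npmrc",
    ".ssh/id_rsa",
    ".ssh/id_ed25519",
    ".aws/credentials",
    ".config/gcloud/application_default_credentials.json",
    ".kube/config",
    ".vault-token",
]

def find_standing_secrets(home: Path) -> list[Path]:
    """Return credential files that exist under the given home directory."""
    return [home / p for p in CANDIDATE_PATHS if (home / p).exists()]
```

Run it with `find_standing_secrets(Path.home())` on developer machines and CI runner images; every hit is identity material that a compromised dependency would find waiting.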

This is exactly why Kontext's credential broker for AI coding agents replaces raw provider keys with placeholders and resolves short-lived credentials only during a governed session.

2. Treat workflow steps and agents as identities

The workflow run is too coarse as an identity boundary. A test step, cache restore, package install, release build, and publish step should not all be treated as one actor.

Security teams need identities for the actual actor and action: which workflow, which step, which package, which repository, which branch, which requested credential, and which policy allowed it. The same applies to AI agents. A coding agent should not inherit a developer's entire GitHub authority because it needs to open one pull request.

For the agent side of this problem, see The API Key is Dead: A Blueprint for Agent Identity in the age of MCP.

3. Authorize credential use at runtime

Credential issuance should happen at the last responsible moment, after policy has enough context to decide whether the action fits the task.

A publish token should be valid only for the publish step, for a specific package, after required checks, from the expected source, and for a short time. A GitHub credential for an AI coding agent should be valid only for the repository and operation it is allowed to perform. A cloud credential should be valid only for the approved action, not every API the human operator can reach.

This is the core of securing LLM tool use with runtime policies: let code propose an action, but require an external policy layer to authorize the side effect.
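Reduced to its smallest form, that policy layer is a decision function evaluated at the moment a side effect is requested. The fields and rules below are a toy sketch of the idea, not Kontext's actual API:

```python
from dataclasses import dataclass

@dataclass
class ActionRequest:
    """Context a policy needs to authorize one side effect."""
    actor: str         # e.g. "release-workflow/publish-step"
    action: str        # e.g. "npm:publish"
    resource: str      # e.g. "@example/pkg"
    source_ref: str    # e.g. "refs/heads/main"
    checks_passed: bool

def authorize(req: ActionRequest) -> bool:
    """Grant publish authority only to the publish step, from the
    expected branch, after required checks have passed."""
    if req.action != "npm:publish":
        return False
    if req.actor != "release-workflow/publish-step":
        return False
    if req.source_ref != "refs/heads/main" or not req.checks_passed:
        return False
    return True
```

The point of the sketch is that the decision happens outside the code proposing the action: a poisoned test step asking for publish authority fails on the actor check even though it runs in the same repository.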

4. Split build, test, and publish trust zones

Teams can reduce risk today even before CI providers offer true step-scoped OIDC. Do not give id-token: write to install, test, lint, or build jobs. Put publishing in a minimal separate job that runs after tests pass, consumes immutable build artifacts, avoids restoring dependency caches from untrusted contexts, and uses environment protection or manual approval for sensitive releases.
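The split can be sketched as a workflow fragment. Job names, the artifact name, and the environment name are illustrative; the load-bearing parts are the empty default permissions, the id-token: write scoped to one job, and the publish job consuming a prebuilt artifact instead of rebuilding:

```yaml
# Sketch: only the publish job can mint OIDC identity.
permissions: {}              # no default permissions for any job
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci --ignore-scripts   # third-party code may run here; no tokens in reach
      - run: npm test
  publish:
    needs: test
    runs-on: ubuntu-latest
    environment: release     # protection rules or manual approval gate
    permissions:
      id-token: write        # publish authority exists ONLY in this job
      contents: read
    steps:
      - uses: actions/download-artifact@v4   # consume the immutable build artifact;
        with:                                 # do not rebuild or restore dependency caches
          name: package-tarball
      - run: npm publish ./pkg.tgz
```

A separate build job would produce and upload the artifact; the publish job never runs install scripts or restores caches, so poisoned cache content never enters the trust zone that can publish.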

The goal is simple: untrusted code may be able to run during install or test, but it should not be running in the same trust zone that can mint publish authority.

What teams should do now

If your organization installed affected TanStack, Mistral, UiPath, OpenSearch, or related packages during the attack window, treat the host as potentially compromised. TanStack recommends rotating AWS, GCP, Kubernetes, Vault, GitHub, npm, and SSH credentials reachable from the install host.

At minimum:

  1. Check lockfiles and package manager caches for affected versions listed in the TanStack, Aikido, and SafeDep advisories.
  2. Pin to known clean versions rather than relying on floating ^ or ~ ranges for critical build-time dependencies.
  3. Review recent GitHub commits, branches, Actions workflows, and package publishes for unexpected activity.
  4. Look for suspicious .claude, .vscode, GitHub Actions, npm, and package-manager artifacts.
  5. Isolate affected hosts and preserve evidence before revoking or rotating credentials when active malware may still be running.
  6. Rotate reachable credentials after evidence is preserved and the host is contained.
  7. Add package release cooldowns, registry proxy policies, and install-script controls where possible.
  8. Separate untrusted PR processing from privileged release workflows.
  9. Remove standing secrets from developer and CI environments that execute third-party code.
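Step 1 is easy to script against npm's v2/v3 lockfile format. The affected-version map below is a placeholder and must be populated from the TanStack, Aikido, and SafeDep advisories; this is a sketch, not a complete scanner:

```python
import json
from pathlib import Path

# Placeholder advisory data: populate from the published advisories.
# Maps package name -> set of known-malicious versions.
AFFECTED: dict[str, set[str]] = {
    "@tanstack/example": {"1.2.3"},
}

def scan_lockfile(lockfile: Path, affected: dict[str, set[str]]) -> list[tuple[str, str]]:
    """Return (name, version) pairs in an npm v2/v3 package-lock.json
    that match known-malicious versions."""
    data = json.loads(lockfile.read_text())
    hits = []
    for path, entry in data.get("packages", {}).items():
        # installed package paths look like "node_modules/@scope/name",
        # possibly nested; the last segment is the package name
        name = path.split("node_modules/")[-1]
        version = entry.get("version")
        if version and version in affected.get(name, set()):
            hits.append((name, version))
    return hits
```

Run the same check against package manager caches and any lockfiles committed during the attack window, since a clean current lockfile does not prove a malicious version was never installed.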

Lifecycle scripts deserve special attention. preinstall, install, postinstall, prepare, git dependencies, tarball dependencies, and exotic sub-dependencies are all code execution surfaces. A package release cooldown helps with fast-detected malicious releases, but install-script allowlists and registry proxy rules are the controls that stop unexpected lifecycle code from running in the first place.
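For npm, the bluntest install-script control is a one-line config. A minimal .npmrc for CI and developer machines (behavior may vary by npm version, so verify against your version's docs):

```ini
; .npmrc — block all lifecycle scripts during install
ignore-scripts=true
```

With this set, installs skip preinstall, install, postinstall, and prepare hooks; packages that genuinely need native build steps can then be rebuilt explicitly and individually, which turns lifecycle execution into an allowlist decision instead of a default.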

The remediation is not just dependency hygiene. It is credential hygiene, workflow isolation, and runtime authorization.

Add a package release cooldown

A minimum release age would not have fixed the poisoned publisher workflow, but it would have protected many consumers from installing a malicious package during the first hours of the campaign. For fast-moving npm malware, a 24-72 hour delay gives maintainers, registries, security vendors, and downstream scanners time to detect and remove bad versions before they enter developer machines or CI.

The exact key is different for each package manager, so verify your version's docs before writing config:

| Package manager | Config file | 3-day cooldown setting |
| --- | --- | --- |
| npm v11.10+ | .npmrc | min-release-age=3 |
| pnpm | pnpm-workspace.yaml | minimumReleaseAge: 4320 |
| Yarn modern | .yarnrc.yml | npmMinimalAgeGate: "3d" |
| Bun | bunfig.toml | [install] minimumReleaseAge = 259200 |

If you use private workspace packages that publish and install immediately, add explicit exemptions for your trusted scopes rather than turning the cooldown off globally. For example, pnpm supports minimumReleaseAgeExclude, Yarn supports npmPreapprovedPackages, and Bun supports minimumReleaseAgeExcludes.
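As one concrete example, a pnpm-workspace.yaml combining the cooldown with a scope exemption might look like this. The keys match those reported above and the scope is illustrative; confirm both against current pnpm documentation before committing:

```yaml
# 3-day cooldown; pnpm's value is expressed in minutes (3 * 24 * 60 = 4320)
minimumReleaseAge: 4320
minimumReleaseAgeExclude:
  - "@mycompany/*"   # trusted workspace scope that publishes and installs immediately
```

Keeping the exemption to named scopes preserves the cooldown for everything else in the dependency tree.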

This is a good task for a coding agent, but the prompt should force it to check current docs and detect the package manager first:

Find my package manager (bun, pnpm, npm, or yarn) and configure a 3-day minimum-release-age or dependency cooldown for installs to blunt supply-chain attacks. Exempt my workspace scopes. Verify the exact config key in current docs before writing.

Where Kontext fits

Kontext is built around a simple premise: credentials should not be ambient. An AI agent or automated tool should receive authority only when a runtime policy decides that the current actor, action, resource, task, and session are allowed.

For AI coding agents, Kontext CLI provides local Guard visibility and hosted governed sessions. It can replace raw provider keys with .env.kontext placeholders, resolve short-lived scoped credentials, and preserve tool-call traces that show who initiated a session, which tools were used, and which credentials were involved.

That does not mean a runtime authorization layer can prevent every malicious dependency from executing. It means the dependency should not find durable authority waiting for it. If malicious code cannot read a standing GitHub token, cannot mint a publish token from the wrong step, and cannot obtain provider credentials without a policy decision, the blast radius collapses.

The TanStack attack is a warning for CI/CD and AI agent security at the same time. Both are systems where software acts on behalf of people. Both need scoped credentials, action-level policy, and audit trails. Both fail when possession of a token becomes the entire authorization model.

FAQ

What happened in the TanStack npm supply chain attack?

On May 11, 2026, an attacker published malicious versions across TanStack npm packages by chaining a pull_request_target workflow issue, GitHub Actions cache poisoning, and runtime extraction of an OIDC token from a release runner. TanStack confirmed 84 malicious versions across 42 @tanstack/* packages.

Was an npm token stolen in the TanStack attack?

TanStack's postmortem says no npm tokens were stolen. The attacker abused the workflow's trusted publishing path: malicious code running inside the release environment minted authority through OIDC and published directly to npm.

Why is this an identity security problem?

It is an identity security problem because the malware's impact depended on the credentials and permissions available in developer and CI environments. The dependency delivered code execution, but credentials enabled publishing, exfiltration, GitHub commits, and propagation.

How can runtime authorization help with supply chain attacks?

Runtime authorization cannot make all dependencies safe, but it can reduce blast radius. It forces sensitive actions such as credential issuance, package publishing, repository writes, cloud access, exports, and external sends through a policy decision at execution time.

What should I do if I installed an affected package?

Treat the host as potentially compromised. Check lockfiles and caches, isolate the machine or runner, preserve evidence, rotate reachable credentials, review recent repository and package activity, and inspect .claude, .vscode, GitHub Actions, npm, and cloud credential artifacts.
