At 10:39 UTC on March 24, a poisoned version of litellm landed on PyPI. No corresponding tag on GitHub. No release notes. Just a direct upload using stolen credentials.

litellm is an LLM API proxy. It sits between your code and providers like OpenAI, Anthropic, and Google, routing requests across models. By design, it holds API keys for every provider in your stack. It has 97 million downloads per month.

Within 13 minutes, a second poisoned version followed. Within 33 minutes of the first upload, both were quarantined. In between, anyone who ran pip install litellm - or installed any package that depended on it - had their machine silently ransacked.

The haul: SSH keys, cloud credentials, Kubernetes configs, git credentials, environment variables, shell history, crypto wallets, SSL private keys, CI/CD secrets, database passwords.

The Credential Chain

This wasn’t a lone actor exploiting a typo. It was the final move in a multi-week campaign by a threat actor called TeamPCP. The entry point: Trivy, Aqua Security’s open-source vulnerability scanner.

On February 27, the attacker exploited a pull_request_target workflow - a GitHub Actions trigger that runs with access to repository secrets even on pull requests from forks - to steal a privileged access token. Aqua's credential rotation was incomplete. The attacker kept access.

Over the next three weeks, TeamPCP hopped through the ecosystem:

  • Mar 19: Poisoned Trivy tags pushed. The payload scraped CI runner process memory for secrets.
  • Mar 20: 45+ npm packages compromised via stolen tokens. A self-propagating worm.
  • Mar 22: Docker Hub images pushed directly. 44 Aqua internal repos defaced.
  • Mar 23: Checkmarx KICS GitHub Action - all 35 tags hijacked.
  • Mar 24: litellm’s CI/CD pipeline, which used the compromised Trivy for vulnerability scanning, had its PYPI_PUBLISH token extracted from runner memory.

A security scanner compromised an API key gateway. Two tools that orgs trust with their broadest credential access.

The compromise originated from our CI/CD pipeline using Trivy for vulnerability scanning. The malicious Trivy version stole our PyPI credentials.

— litellm maintainer, Hacker News

The Payload

Version 1.82.8 dropped a file called litellm_init.pth into the wheel. Python’s site.py processes .pth files at interpreter startup, meaning the payload executes on every Python invocation. You don’t need to import litellm. You don’t need to run it. Just having the package installed is enough.
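The mechanism is easy to reproduce in a sandbox. site.addsitedir() is the same routine site.py runs over site-packages at startup, and any .pth line that begins with "import" is exec()'d on the spot. A minimal, harmless demonstration (the filename and flag variable are illustrative, not from the actual payload):

```python
import pathlib
import site
import tempfile

# site.py calls addsitedir() on site-packages at interpreter startup.
# Any line in a .pth file that starts with "import" is exec()'d
# immediately -- before a single line of user code runs.
with tempfile.TemporaryDirectory() as d:
    pth = pathlib.Path(d) / "demo_init.pth"
    # Stand-in for a payload: just sets a flag in builtins.
    pth.write_text("import builtins; builtins._pth_demo_ran = True\n")
    site.addsitedir(d)  # simulate startup processing for this directory

print(_pth_demo_ran)  # -> True: the .pth ran without any import of the package
```

Note that `_pth_demo_ran` resolves as a bare name because Python falls back to builtins on lookup. The real payload used the same hook to launch a credential harvester instead of setting a flag.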

The credential harvester was 332 lines. It crawled six directories deep for .env files, grabbed every cloud provider’s credential store, dumped Kubernetes secrets across all namespaces, and collected crypto wallet keys. Everything was encrypted and sent to a domain impersonating litellm infrastructure.

In Kubernetes environments, it went further: spinning up privileged pods on every node, mounting the host filesystem, and installing a persistent backdoor as a systemd service that polled for new payloads every 50 minutes.

Transitive Dependencies

litellm is a transitive dependency for dozens of AI frameworks. Running pip install dspy would pull in the compromised version. Same for any MCP plugin or AI tool that used litellm under the hood. You didn’t need to know litellm existed to be affected.
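You can get a rough look at that hidden tree with nothing but the standard library. A sketch using importlib.metadata - the requirement-string parsing here is deliberately crude and skips optional extras:

```python
import re
from importlib import metadata

def transitive_deps(package, seen=None):
    """Recursively collect the installed dependency names of a package."""
    seen = set() if seen is None else seen
    try:
        requires = metadata.requires(package) or []
    except metadata.PackageNotFoundError:
        return seen  # not installed locally; nothing to walk
    for req in requires:
        if "extra ==" in req:  # skip optional extras
            continue
        # Crude parse: the name is everything before the first specifier.
        name = re.split(r"[ ;<>=!~\[(]", req, maxsplit=1)[0].strip()
        if name and name.lower() not in seen:
            seen.add(name.lower())
            transitive_deps(name, seen)
    return seen

print(sorted(transitive_deps("pip")))
```

Run it against your top-level packages and count how many of the names you recognize. Each one is a maintainer, a CI pipeline, and a publishing credential you're implicitly trusting.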

Caught by a Bug

Here’s the part that should keep you up at night: the attack was discovered by accident.

The .pth launcher spawned a child Python process. But because .pth files activate on every interpreter startup, the child re-triggered the same .pth, which spawned another child, which triggered again. An exponential fork bomb. A bug in the malware.

Callum McMahon at FutureSearch was using an MCP plugin inside Cursor that pulled in litellm as a transitive dependency. When 1.82.8 installed, his machine ran out of RAM and crashed. That crash led to investigation, which led to discovery.

Karpathy put it bluntly: if the attacker hadn’t “vibe coded” this attack, it could have gone undetected for days or weeks.

Twenty-eight spam comments were posted in 43 seconds to the GitHub disclosure issue to bury the report. The issue was closed as “not planned” - likely by the attacker using the compromised account. This was a sophisticated, coordinated operation that stumbled on a fork bomb.

The Bigger Problem

Karpathy’s reaction went beyond the specific incident. His argument: the dependency model itself is broken.

Every pip install pulls in a tree of packages. Each one has maintainers, CI pipelines, and publishing credentials that can be compromised. One break anywhere in the chain is enough. And AI development is making the trees deeper. MCP plugins, AI SDKs, agent frameworks: each one adds dozens of transitive packages you’ve never audited and probably don’t know exist.

Two weeks ago, I wrote about how your AI tools are the attack surface: prompt injection through pull requests, GitHub Issues, and CI/CD pipelines. That was about AI tools being tricked by malicious input. This is the other side: the packages those tools depend on being poisoned at the source. The MCP plugin in Cursor that pulled in litellm wasn’t vulnerable to prompt injection. It was vulnerable to something older: trusting its dependencies.

This is also the tension from Buy vs Build Just Flipped. The cost of building collapsed thanks to AI tools. But the cost of dependencies isn’t just license fees and integration anymore. It’s security exposure. Every package is a trust relationship with every person and pipeline in its tree.

Karpathy’s suggestion: use LLMs to “yoink” functionality when it’s simple enough. Instead of pip install some-package for a utility function, ask an AI to write it directly. The code is yours to audit.

The Math Changed

Minimizing dependencies has always been good practice. What changed is the alternative. “Just write it yourself” used to mean hours of work for anything non-trivial. Now it’s minutes. The calculus genuinely shifted.

What to Actually Do

  • Pin and hash. Pin dependencies to exact versions with hash verification. A loose constraint like litellm>=1.64.0 is what made dspy vulnerable: any new release was pulled in automatically. Exact pins with hashes would have required an explicit update.
  • Audit your transitive tree. Know what you’re actually installing. pip install your-mcp-plugin might pull in 200 packages you’ve never heard of.
  • Use Trusted Publishers. PyPI supports publishing via short-lived OIDC tokens instead of static API keys. If litellm had used it, there would have been no long-lived PyPI token sitting in CI runner memory to steal.
  • Question every dependency. Before adding a package, ask: can an LLM write this in 10 minutes? If yes, skip the package and the trust relationship that comes with it.
  • Watch for .pth files. The .pth trick bypasses normal imports entirely. Monitoring for unexpected .pth files in your site-packages is a cheap win.
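The last item is directly scriptable. A minimal audit sketch - note that site.getsitepackages() isn't available in every environment (some older virtualenvs omit it), so treat this as a starting point:

```python
import pathlib
import site

def pth_files():
    """List every .pth file in the current site directories.

    Anything here you didn't knowingly install deserves a look:
    .pth files can run code on every interpreter startup.
    """
    dirs = list(site.getsitepackages())
    dirs.append(site.getusersitepackages())
    found = []
    for d in dirs:
        p = pathlib.Path(d)
        if p.is_dir():
            found.extend(sorted(p.glob("*.pth")))
    return found

for f in pth_files():
    print(f)
```

Some .pth files are legitimate (editable installs and path hooks use them), so the point isn't zero results - it's knowing your baseline so a new arrival like litellm_init.pth stands out.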

The Uncomfortable Part

The attack was live for an hour. The credential chain behind it took four weeks to build across five ecosystems. It was caught because the attacker’s code had a bug.

Without that bug, this post might not exist yet. The litellm attack didn’t change the math on dependencies. It just made it impossible to ignore.