LiteLLM Compromise Exposes Risks in AI Supply Chains

Summary: Trend Micro Research has revealed that the criminal group TeamPCP compromised LiteLLM, a popular Python package for AI services, in a sophisticated multi-ecosystem supply chain attack. Versions 1.82.7 and 1.82.8 contained malicious code that deployed a three-stage payload to steal sensitive data and establish a persistent backdoor.

Trend Micro Research has uncovered one of the most sophisticated multi-ecosystem supply chain campaigns publicly documented, targeting LiteLLM, a widely used Python package for AI services. Versions 1.82.7 and 1.82.8 contained malicious code that deployed a three-stage payload: credential harvesting, Kubernetes lateral movement, and a persistent backdoor for remote code execution. The attack targeted cloud credentials, SSH keys, and Kubernetes secrets, which were stolen and encrypted before exfiltration.

The campaign spanned multiple ecosystems: PyPI, npm, Docker Hub, GitHub Actions, and OpenVSX. TeamPCP leveraged compromised CI/CD pipelines and security scanners such as Trivy to escalate privileges and propagate malicious payloads. The investigation began when production systems running LiteLLM crashed with out-of-memory (OOM) errors, which pointed back to the compromised package.
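For teams responding to an incident like this, the first question is whether an affected build is present. A minimal sketch, assuming the package is installed under its PyPI name `litellm` and using the two version numbers named in the report:

```python
# Check whether the locally installed LiteLLM build is one of the
# compromised releases (1.82.7 and 1.82.8, per the report).
from importlib import metadata

COMPROMISED_VERSIONS = {"1.82.7", "1.82.8"}

def is_compromised(version: str) -> bool:
    """True if the given version string matches a known-bad release."""
    return version in COMPROMISED_VERSIONS

def installed_litellm_compromised() -> bool:
    """True only if a compromised LiteLLM build is actually installed."""
    try:
        version = metadata.version("litellm")
    except metadata.PackageNotFoundError:
        return False  # package not installed at all
    return is_compromised(version)

if __name__ == "__main__":
    print("compromised build installed:", installed_litellm_compromised())
```

The same check belongs in CI, where it can fail a build before a tainted version reaches production.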

The payload comprised a credential harvester targeting more than 50 categories of secrets, a Kubernetes toolkit for cluster compromise, and a persistent backdoor enabling ongoing remote code execution. The attack underscores the need for stronger security measures around AI proxy services and for continuous monitoring of supply chain dependencies to protect sensitive data and operational integrity.
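The kind of environment sweep such a harvester performs can also be run defensively, to gauge what a compromised process in your own deployment could have read. A sketch with a handful of illustrative patterns (the report does not enumerate the attacker's actual 50+ secret categories, so these are common examples, not its real list):

```python
import os
import re

# Illustrative patterns only -- the actual harvester reportedly targeted
# 50+ secret categories; these are common examples, not its real list.
SECRET_PATTERNS = {
    "aws": re.compile(r"^AWS_(SECRET_ACCESS_KEY|ACCESS_KEY_ID|SESSION_TOKEN)$"),
    "generic": re.compile(r"TOKEN|SECRET|PASSWORD|API_KEY", re.IGNORECASE),
}

def exposed_env_secrets(environ=None):
    """Report which environment variable *names* look like secrets.

    Values are never touched or logged -- the point is to measure
    exposure, not to collect credentials.
    """
    environ = os.environ if environ is None else environ
    hits = {category: [] for category in SECRET_PATTERNS}
    for key in environ:
        for category, pattern in SECRET_PATTERNS.items():
            if pattern.search(key):
                hits[category].append(key)
                break  # count each variable once
    return hits

if __name__ == "__main__":
    for category, keys in exposed_env_secrets().items():
        print(category, keys)
```

Variables flagged this way are candidates for moving out of the process environment and into a secrets manager or mounted credential files with short lifetimes.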

The compromised LiteLLM versions were deployed on major platforms, indicating the scale of the attack. The incident highlights the risks of relying on third-party tools for AI infrastructure and the need for robust security practices to safeguard sensitive information.

Key facts

  • LiteLLM, a popular Python package for AI proxy services, was compromised on PyPI.
  • Two versions (1.82.7 and 1.82.8) contained malicious code deploying a three-stage payload: credential harvesting, Kubernetes lateral movement, and a persistent backdoor for remote code execution.
  • The attack targeted cloud credentials, SSH keys, and Kubernetes secrets, which were stolen and encrypted before exfiltration.

Why it matters

This incident underscores the critical importance of monitoring supply chain dependencies in AI infrastructure to prevent data theft and maintain operational integrity.
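One concrete mitigation this argues for is pinning dependencies to exact versions with hashes, so a tampered release cannot slip in through a loose version range. A sketch of a check that flags unpinned requirement lines (pip's own `--require-hashes` mode enforces the same property at install time):

```python
import re

# A requirement line is considered pinned only if it uses "==" and
# carries a sha256 hash, e.g.:
#   litellm==1.82.9 --hash=sha256:<64 hex chars>
PINNED = re.compile(r"^\S+==\S+\s+--hash=sha256:[0-9a-fA-F]{64}")

def unpinned_lines(requirements_text):
    """Return requirement lines that are not pinned with a hash."""
    bad = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if not PINNED.match(line):
            bad.append(line)
    return bad
```

Run against a `requirements.txt` in CI, a non-empty result fails the build; combined with lockfile review on every dependency bump, this shrinks the window in which a compromised release can be pulled automatically.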