A fake repository on Hugging Face has become one of the clearest examples yet of how the AI software supply chain is turning into a frontline security problem. Researchers found that a malicious project impersonated OpenAI's privacy filter, copied the documentation closely enough to appear legitimate, and attracted more than 244,000 downloads before the campaign was exposed.
The attack worked because it looked credible. The repository used the name "Open-OSS/privacy-filter" to imitate the legitimate "openai/privacy-filter" project and mirrored its documentation, examples, and tone. For many developers, that surface resemblance was enough to make the project look safe, especially on a platform where popularity is often mistaken for trustworthiness.
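The name-imitation trick is mechanically simple to detect. As a minimal sketch (the `KNOWN_PROJECTS` list and the 0.8 threshold are illustrative assumptions, not values from the report), a review pipeline could compare a candidate repository name against a curated list of well-known projects using ordinary string similarity:

```python
import difflib

# Illustrative allow-list of legitimate namespaces worth protecting.
KNOWN_PROJECTS = ["openai/privacy-filter"]

def flag_lookalike(candidate: str, threshold: float = 0.8) -> list[tuple[str, float]]:
    """Return known project names suspiciously similar to `candidate`.

    Exact matches are skipped: the point is to catch near-misses
    like "Open-OSS/privacy-filter" vs "openai/privacy-filter".
    """
    hits = []
    for known in KNOWN_PROJECTS:
        ratio = difflib.SequenceMatcher(None, candidate.lower(), known.lower()).ratio()
        if candidate.lower() != known.lower() and ratio >= threshold:
            hits.append((known, round(ratio, 2)))
    return hits

# flag_lookalike("Open-OSS/privacy-filter") flags the openai project as a near match,
# since the two names share the long "/privacy-filter" suffix and the "open" prefix.
```

Real registries use richer signals (account age, verified-publisher badges), but even this toy check would have surfaced the collision described above.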
According to the reporting, the payload included a Rust-based infostealer. The malicious chain disabled SSL checks, reconstructed hidden infrastructure from encoded fragments, and used PowerShell to fetch additional code. Once running, it could steal browser credentials, authentication tokens, Discord data, and cryptocurrency wallet information, turning what looked like an AI utility into a broad credential theft operation.
The deeper lesson is not limited to one fake repository. Attackers are following developer attention into AI tooling, model hubs, and fast-moving open source ecosystems where users often install first and inspect later. That makes platforms like Hugging Face attractive targets for malware operators trying to exploit trust, virality, and the pressure to move quickly.
The incident also reinforces a practical point for security teams: AI infrastructure now needs the same supply-chain scrutiny already applied to package registries and developer dependencies. If a project becomes popular unusually quickly, mimics a well-known vendor, or asks users to run local scripts without a clear review path, it should be treated as a potential compromise point rather than a harmless convenience.
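Those red flags can be combined into an automated triage gate. The following is a toy sketch under stated assumptions: the signal names, point weights, and thresholds are all illustrative inventions for this article, not a published scoring standard, and the download/age figures in the example merely echo the profile of the incident described above:

```python
from dataclasses import dataclass

@dataclass
class RepoSignals:
    """Hypothetical signals a review pipeline might collect per repository."""
    downloads_last_7d: int
    age_days: int
    name_resembles_known_vendor: bool
    ships_install_scripts: bool

def risk_score(s: RepoSignals) -> int:
    """Additive toy score; weights are illustrative, not calibrated."""
    score = 0
    if s.age_days < 30 and s.downloads_last_7d > 10_000:
        score += 2  # unusually fast popularity for a brand-new repository
    if s.name_resembles_known_vendor:
        score += 3  # vendor mimicry is the strongest single signal here
    if s.ships_install_scripts:
        score += 1  # local scripts widen the blast radius of a compromise
    return score

# A repository matching the profile in this article trips all three checks,
# while an old, quiet, plainly named project scores zero.
```

The exact weights matter less than the posture: anything scoring above a team's chosen threshold gets held for human review instead of flowing straight into builds.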