Mercor Hit by LiteLLM Supply Chain Attack
startup
| Source: Mastodon | Original article
Mercor, the Stockholm‑based AI recruiting platform that uses large language models to match candidates with jobs, confirmed on March 31 that it fell victim to the massive LiteLLM supply‑chain breach rippling through the AI industry. The compromise originated in the open‑source LiteLLM library, a cost‑management wrapper that many firms adopt to route requests to inexpensive commercial LLM providers. Hackers injected malicious code into a recent LiteLLM release, which then propagated to downstream users, including Mercor's hiring pipeline.
The attackers claim to have exfiltrated roughly 4 terabytes of data, encompassing Mercor's source code, internal databases and, crucially, the personal information of thousands of job seekers. Portions of the stolen material have already surfaced on dark‑web forums, prompting immediate concerns over identity theft and the misuse of proprietary recruitment algorithms. Mercor's security team is working with law enforcement and has begun notifying affected users under the GDPR's breach‑notification requirements.
The incident matters because it underscores how quickly a single compromised open‑source component can jeopardise entire AI stacks. LiteLLM’s popularity stems from its ability to switch between providers such as OpenAI, Anthropic and Cohere, offering cost savings that many startups chase. Yet the attack reveals a trade‑off: the more “inexpensive commercial options” a company integrates, the larger its attack surface becomes. The breach also follows a string of recent AI‑related supply‑chain incidents, including the Trivy compromise that paved the way for the LiteLLM injection.
What to watch next: patches to the LiteLLM repository are expected within days, and security researchers will likely audit other dependencies that interact with it. Regulators may issue guidance on third‑party risk management for AI services, and additional firms are expected to disclose similar breaches as the fallout spreads. Companies that rely on LiteLLM should audit their implementations, rotate credentials and consider hardened, vetted alternatives while the industry grapples with the broader implications of AI supply‑chain security.
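One concrete step in such an audit is verifying that a downloaded release artifact matches the checksum the project publishes, so a tampered package is caught before installation. The sketch below is a generic illustration of that check, not LiteLLM-specific tooling; the wheel filename and expected digest shown in the comments are placeholders, not real release values.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder values -- substitute the actual downloaded wheel and the
# checksum published by the project before trusting a release:
#
#   expected = "<published sha256 digest>"
#   assert sha256_of("litellm-x.y.z-py3-none-any.whl") == expected
```

Pinning exact versions with hash checking in a requirements file (e.g. pip's `--require-hashes` mode) automates the same comparison on every install.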