A significant supply chain attack has sent shockwaves through the AI industry, leading Meta to halt its work with AI data startup Mercor. This breach, executed through a compromised version of the open-source LiteLLM library, has potentially exposed the highly sensitive training methodologies behind leading large language models, not just personal data. This incident has sparked investigations at other major AI players like OpenAI and Anthropic, and has resulted in a class action lawsuit involving over 40,000 individuals.

Mercor, a rapidly growing company founded by young entrepreneurs, specialises in creating bespoke training datasets for AI giants. Its business model, focused on generating fine-tuning and reinforcement learning data, has made it a critical, yet now vulnerable, link in the AI supply chain. The attack, orchestrated by a group identified as TeamPCP, exploited credentials obtained from a security scanner to inject malicious code into the LiteLLM library. This poisoned package was available for a short period, but its sophisticated payload was designed to harvest extensive sensitive information, including API keys, cloud credentials, and other secrets, exfiltrating them to a remote server.
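The payload described above reportedly harvested API keys, cloud credentials, and other secrets from its host environment. As a purely defensive illustration (not the actual malware, and not any specific vendor's tool), a minimal audit script can flag environment variables whose names suggest they hold secrets; the name patterns below are assumptions for the sketch, not an exhaustive list:

```python
import os
import re

# Name patterns that commonly indicate a secret (illustrative, not exhaustive).
SECRET_NAME_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (r"api[_-]?key", r"secret", r"token", r"password", r"^aws_", r"credential")
]

def find_exposed_secrets(env: dict) -> list:
    """Return the names (never the values) of env vars that look like secrets."""
    return sorted(
        name for name in env
        if any(pat.search(name) for pat in SECRET_NAME_PATTERNS)
    )

if __name__ == "__main__":
    # Audit the current process environment. Printing only the variable
    # names means the report itself cannot leak a credential.
    for name in find_exposed_secrets(dict(os.environ)):
        print(name)
```

Running such an audit before and after dependency upgrades gives teams a baseline of what a compromised package could reach, which is one reason incidents like this push organisations toward pinned, hash-verified dependencies.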
The exposure of approximately four terabytes of data from Mercor includes source code, user databases, and personal verification documents. The most alarming aspect for companies like Meta, however, is the potential leak of proprietary data selection criteria, labelling protocols, and training strategies. These methodologies represent significant intellectual property and competitive advantages that companies have invested billions in developing and keeping secret. Because multiple competitors rely on the same data vendors, a single breach can have widespread implications, exposing the carefully guarded secrets of many at once.

The fallout also includes a class action lawsuit alleging inadequate cybersecurity at Mercor, and a claim of responsibility by the threat group Lapsus$, potentially in collaboration with TeamPCP, which has begun auctioning the stolen data on the dark web. The incident is a stark warning about the systemic risks inherent in the AI supply chain, highlighting the fragility of the infrastructure on which the industry's rapid advances have been built.
Original source: https://thenextweb.com/news/meta-mercor-breach-ai-training-secrets-risk
Related articles from LaRebelión:
- OCSF The Security Data Language Teams Need
- ARM Launches Revolutionary AI Chip with Meta as Star Client
- Meta Invests 27 Billion in AI in a Massive Deal
- OpenAI Launches GPT-54 for Advanced Knowledge Work
- Amazon's 337 Billion Spain Boost for AI Data Centres
Article generated by LaRebelionBOT