The cutting edge of artificial intelligence development, once hailed as a beacon of rapid innovation, is now facing a significant threat. Researchers have uncovered a widespread issue where the very platforms designed to accelerate AI progress are being systematically exploited by malicious actors. Hugging Face, a colossal repository hosting over a million machine learning models, and ClawHub, the central hub for OpenClaw's AI agent skills, have both been found to harbour hundreds of compromised entries.

These malicious models and skills are not mere nuisances; they are designed to inflict serious damage. Attackers are leveraging the inherent trust developers place in these central repositories to inject malware that can steal credentials, establish covert backdoors into systems, and even hijack AI agents for illicit purposes such as cryptocurrency mining. The techniques vary, but the core logic remains the same: exploit the AI industry's own infrastructure to compromise it. On Hugging Face, for instance, a technique called 'nullifAI' bypasses existing security scanners by embedding malicious Python code within models; the code executes the moment a model is loaded, because Python's pickle serialisation format runs arbitrary callables during deserialisation. This can grant attackers direct control over a user's machine.
Similarly, ClawHub has been infiltrated by a coordinated campaign, with a significant portion of its AI agent skills identified as malicious. Because AI agents autonomously select and execute these skills as part of their workflows, a compromised skill can reach sensitive data and internal networks inside enterprise environments, affecting not just individual developers but potentially entire organisations.

These attacks fit a broader pattern of software supply chain compromises, seen recently in packages such as LiteLLM and in registries such as PyPI, underscoring the growing vulnerability of the interconnected digital ecosystem. The speed at which these attacks unfold, often within hours or even minutes, presents a formidable challenge for defenders. The AI industry's substantial investment in model development appears to have outpaced its investment in securing the distribution channels, leaving these critical repositories as the new prime target in software security.
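The article does not describe OpenClaw's internals, but the risk pattern it names, an agent auto-executing third-party skills, can be mitigated generically by pinning each skill to the digest of the code that was actually reviewed. A minimal, hypothetical sketch (the registry and `run_skill` helper are illustrative, not ClawHub's real API):

```python
import hashlib

def sha256(code: str) -> str:
    return hashlib.sha256(code.encode("utf-8")).hexdigest()

# Hypothetical allowlist: skill name -> digest recorded at review time.
SAFE_CODE = "result = 2 + 2"
APPROVED_SKILLS = {"add": sha256(SAFE_CODE)}

def run_skill(name, code):
    """Execute a skill only if its code still matches the pinned digest."""
    if APPROVED_SKILLS.get(name) != sha256(code):
        raise PermissionError(f"skill {name!r} rejected: unknown or modified")
    scope = {}
    exec(code, scope)          # real deployments would also sandbox this step
    return scope.get("result")

print(run_skill("add", SAFE_CODE))               # 4
try:
    run_skill("add", SAFE_CODE + "\nimport os")  # tampered copy
except PermissionError as exc:
    print("rejected:", exc)
```

Digest pinning alone does not make a malicious-but-approved skill safe, which is why it is usually paired with sandboxing and least-privilege network access for the agent itself.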
Original source: https://thenextweb.com/news/hugging-face-clawhub-malware-ai-supply-chain
Related articles from LaRebelión:
- Snow Teams and Email Malware Steal Active Directory
- Google Uncovers AI Prompt Injection Attacks Online
- Iran Targets OpenAI's 30bn AI Hub: Stargate Under Threat
- Self-Propagating Malware Attacks Open Source Software
- Trivy Hack Spreads Malware via Docker
Article generated by LaRebelionBOT