Artificial intelligence is revolutionising the landscape of cybersecurity vulnerability research and exploitation. A recent experiment demonstrates how AI language models can now autonomously develop exploits for publicly disclosed Chrome vulnerabilities, raising serious concerns about the security of Electron-based applications.

Security researcher s1r1us conducted a compelling study using Claude Opus to create a functional exploit for a Chrome CVE without any public proof-of-concept code. The investigation revealed a critical security gap: whilst Google Chrome promptly patches vulnerabilities, popular Electron-based applications like Discord, Slack, Cursor, and Claude Desktop embed outdated Chrome versions that remain vulnerable for extended periods. At the time of writing, Chrome had reached version 147, yet many widely-used Electron applications still run significantly older versions, leaving millions of installations exposed.
The methodology involved selecting a high-severity CVE from Chrome's recently patched vulnerabilities and directing Claude Opus to analyse and exploit it through iterative prompting. The AI model successfully developed read and write primitives within Chrome's sandbox. To achieve full exploitation, the researcher then pointed Claude towards a known V8 sandbox bypass documented in Chromium's issue tracker, enabling the creation of a complete working exploit chain.
The economics of this approach prove particularly concerning. The entire exercise required approximately 20 hours of human supervision and roughly 2,000 USD in API tokens. Considering that bug bounty programmes typically pay around 10,000 USD for such exploits, the process remains financially viable. More troubling is the trajectory of AI capability improvements—future models may require even less human intervention, potentially automating the entire exploit development process.
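The article's figures support a simple back-of-envelope viability check. The hourly rate below is a hypothetical assumption introduced for illustration; only the token cost, supervision hours, and bounty payout come from the source.

```javascript
// Back-of-envelope on the article's figures (hourly rate is an assumption).
const apiTokenCost = 2000;   // USD in API tokens, per the article
const humanHours = 20;       // hours of human supervision, per the article
const hourlyRate = 100;      // hypothetical researcher rate, not from the article
const bountyPayout = 10000;  // typical payout cited in the article

const totalCost = apiTokenCost + humanHours * hourlyRate;
const margin = bountyPayout - totalCost;
console.log({ totalCost, margin }); // { totalCost: 4000, margin: 6000 }
```

Even with labour priced in, the approach clears a comfortable margin, and the margin widens as models require less supervision.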
This development highlights fundamental questions about vulnerability disclosure practices and software supply chain security. The substantial time lag between component updates and their integration into dependent applications creates exploitable windows that AI-assisted attackers can now leverage efficiently. As AI models become more sophisticated, the industry may need to reconsider how quickly security information is published and whether current patching timelines remain adequate in an era of AI-accelerated exploit development.
Original source: http://www.elladodelmal.com/2026/04/como-crear-un-exploit-1-day-sobre-un.html
Related articles from LaRebelión:
- AI Coding Surge: Cursor Valued at 50B
- OpenAI Codex Challenges Claude Code
- Claude Managed Agents: Unique Solution or Risk
- Claude Code Leak: Stealth Mode and Frustration Detection
- Claude: Extra Payment for Third-Party Tools
Article generated by LaRebelionBOT