Friday, 15 May 2026

Debug AI Agents Locally with Raindrop's Workshop

Raindrop AI has launched Workshop, an open-source debugging tool for AI agents. Released under the MIT License, Workshop lets developers debug and evaluate AI agents entirely on their local machines, addressing longstanding privacy concerns about sending sensitive trace data to external servers.


Workshop operates as a local daemon with an intuitive user interface, streaming every token, tool call, and decision made by an AI agent directly to a local dashboard, typically accessible at localhost:5899. All this information is stored in a single, lightweight SQL database file that consumes minimal memory. This real-time telemetry approach eliminates the delays associated with traditional polling methods, giving developers immediate visibility into their agents' behaviour, including errors and decision-making processes.
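The single-file event log behind this design can be sketched in a few lines. The schema and helper below are purely illustrative assumptions, not Workshop's actual database layout; an in-memory SQLite database stands in for the lightweight file Workshop keeps on disk.

```python
import sqlite3

# Hypothetical schema illustrating the single-file trace-store pattern.
# Workshop keeps a file on disk; ":memory:" keeps this sketch self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE IF NOT EXISTS events (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        ts REAL,          -- seconds since the agent run started
        kind TEXT,        -- 'token', 'tool_call', 'decision', 'error'
        payload TEXT
    )
""")

def record(kind: str, payload: str, ts: float) -> None:
    """Append one agent event; a dashboard can tail this table live."""
    conn.execute(
        "INSERT INTO events (ts, kind, payload) VALUES (?, ?, ?)",
        (ts, kind, payload),
    )
    conn.commit()

# Simulated stream of events from a single agent turn.
record("token", "Hello", 0.01)
record("tool_call", "search(query='vaccines')", 0.05)
record("error", "missing follow-up question", 0.09)

# A local dashboard would stream rows as they arrive;
# here we simply read them back in timestamp order.
rows = conn.execute("SELECT kind, payload FROM events ORDER BY ts").fetchall()
```

Because every event lands in one ordered table, a UI polling or tailing that table sees the full trajectory, tokens, tool calls, and errors interleaved, without a round trip to any external service.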

One of Workshop's most impressive features is its "self-healing eval loop." This functionality enables coding agents like Claude Code to autonomously read traces, write evaluations against the codebase, and fix broken code without human intervention. For instance, if a veterinary assistant agent fails to ask necessary follow-up questions, Workshop captures the complete trajectory. Claude Code can then analyse this trace, write a specific evaluation, identify the logic error in the prompt or code, and re-run the agent until all assertions pass successfully.
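The loop described above, run the agent, evaluate the trace, patch, and retry until assertions pass, can be sketched as follows. Every name here (`run_agent`, `evaluate`, `fix_prompt`) is a hypothetical stand-in for what a coding agent like Claude Code would do against real traces and code, not Workshop's API.

```python
def run_agent(prompt: str) -> str:
    # Stand-in veterinary assistant: it only asks a follow-up
    # question once the prompt tells it to.
    if "ask follow-up" in prompt:
        return "What symptoms is your pet showing?"
    return "Here is some general advice."

def evaluate(reply: str) -> bool:
    # Eval derived from the captured trace: the agent must end
    # its reply with a follow-up question.
    return reply.endswith("?")

def fix_prompt(prompt: str) -> str:
    # A coding agent would edit the real prompt or code;
    # here we just patch a string.
    return prompt + " Always ask follow-up questions."

prompt = "You are a veterinary assistant."
for attempt in range(5):
    reply = run_agent(prompt)
    if evaluate(reply):
        break          # all assertions pass; loop is "healed"
    prompt = fix_prompt(prompt)
```

The first run fails the eval, the prompt is patched, and the second run passes, mirroring the capture-evaluate-fix-rerun cycle the article describes, with no human in the loop.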

Workshop supports multiple programming languages, including TypeScript, Python, Rust, and Go, and integrates with popular SDKs and frameworks such as Vercel AI SDK, OpenAI, Anthropic, LangChain, LlamaIndex, and CrewAI. It also works with various coding agents, including Claude Code, Cursor, Devin, and OpenCode. Installation on macOS, Linux, and Windows requires just a single shell command that handles binary placement and PATH configuration automatically. For developers who prefer building from source, the repository is available on GitHub and uses the Bun runtime.

By providing a "sane" approach to debugging agents locally whilst maintaining data sovereignty, Workshop represents a significant step forward in making AI agent development more accessible and secure for the developer community.

Original source: https://venturebeat.com/technology/developers-can-now-debug-and-evaluate-ai-agents-locally-with-raindrops-open-source-tool-workshop


Article generated via LaRebelionBOT
