Saturday, April 11, 2026

AI Agents: Zero Trust Architectures Secure Credentials

The rapid adoption of AI agents by businesses, with a staggering 79% of organisations already employing them, has exposed a significant security vulnerability: credentials are often housed within the same environment as untrusted code. This monolithic approach means a single prompt injection attack can grant an adversary access to sensitive tokens and API keys, leading to a potentially massive blast radius impacting the entire container and connected services. The gap between the speed of AI agent deployment and actual security readiness is a growing concern, highlighted by reports indicating a lack of comprehensive security approval and established AI governance policies across many enterprises.


In response to this critical challenge, two distinct architectural approaches are emerging to enhance AI agent security. Anthropic's Managed Agents adopt a strategy of separation, dividing agents into three non-trusting components: a 'brain' for decision-making, 'hands' for code execution in disposable containers, and a persistent 'session' log. Crucially, credentials are kept entirely separate, residing in an external vault. When an agent requires access, a session-bound token is sent to a proxy that retrieves the actual credentials, ensuring the agent itself never directly handles them. This architectural shift, while improving security, also offers performance benefits by decoupling inference from container startup and enhances durability through persistent session logs.
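The token-and-proxy flow described above can be sketched in a few lines. This is a minimal illustration of the pattern, not Anthropic's actual implementation: the class names (`CredentialVault`, `CredentialProxy`, `Agent`) and the in-process dictionary standing in for an external vault are all assumptions made for clarity. The key property shown is that the agent only ever holds an opaque session token, while the real secret is attached by the proxy after the agent hands off the request.

```python
import secrets


class CredentialVault:
    """Stands in for an external vault; in this pattern it runs
    outside the agent's container, never inside it."""

    def __init__(self):
        self._secrets = {}   # credential name -> real secret value
        self._sessions = {}  # session token -> set of allowed credential names

    def store(self, name, value):
        self._secrets[name] = value

    def issue_session_token(self, allowed_credentials):
        # A session-bound token grants access only to the listed credentials.
        token = secrets.token_hex(16)
        self._sessions[token] = set(allowed_credentials)
        return token


class CredentialProxy:
    """Sits between the agent and external services; exchanges the
    session token for the real credential at request time."""

    def __init__(self, vault):
        self._vault = vault

    def forward(self, session_token, credential_name, request):
        allowed = self._vault._sessions.get(session_token)
        if allowed is None or credential_name not in allowed:
            raise PermissionError("session token not authorised for this credential")
        real_secret = self._vault._secrets[credential_name]
        # The real credential is injected here, after the agent has
        # handed off the request -- the agent never sees it.
        return {**request, "headers": {"Authorization": f"Bearer {real_secret}"}}


class Agent:
    """The 'hands': runs untrusted code and holds only the opaque token."""

    def __init__(self, proxy, session_token):
        self.proxy = proxy
        self.session_token = session_token

    def fetch_data(self):
        # Hypothetical credential name and URL, for illustration only.
        return self.proxy.forward(self.session_token, "api_token",
                                  {"url": "https://api.example.com/data"})
```

Even if a prompt injection fully compromises the `Agent`, the attacker obtains only a session-bound token that the proxy can revoke or scope, not the underlying secret.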

Nvidia's NemoClaw takes a different path, focusing on robust containment and monitoring. Instead of separating components, it wraps the entire agent within four stacked security layers, including kernel-level sandboxing, default-deny networking, and minimal-privilege access. A key feature is the 'intent verification' layer, which intercepts and evaluates every agent action before it interacts with the host system. While this provides deep visibility and control, it requires significant operator staffing to manage the extensive logging and approval processes.

The core difference between the two architectures lies in how close credentials sit to the execution environment. Anthropic removes credentials from the blast radius entirely, making them inaccessible even if the sandbox is compromised. Nvidia constrains the blast radius with multiple security layers and policy-gated access, but some runtime credentials still exist within the sandbox, which carries greater risk under indirect prompt injection, where malicious instructions are embedded in data the agent queries. Both approaches represent a significant improvement over the default monolithic model and address the urgent need for zero-trust principles in AI agent security.
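The default-deny intent-verification layer can be sketched as a policy gate that every action must pass before it reaches the host. This is a simplified illustration of the technique, not NemoClaw's actual API: the names (`Action`, `IntentVerifier`, `SandboxedAgent`) and the allowlist-of-tuples policy format are assumptions made for the example.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Action:
    kind: str    # e.g. "network", "file_write", "exec"
    target: str  # e.g. a hostname or file path


class IntentVerifier:
    """Default-deny gate: an action runs only if it matches an explicit
    allowlist entry; every decision is recorded for operator review."""

    def __init__(self, policy):
        self.policy = policy  # set of (kind, target) pairs that are permitted
        self.audit_log = []   # the extensive logging the article mentions

    def verify(self, action):
        allowed = (action.kind, action.target) in self.policy
        self.audit_log.append((action, allowed))
        return allowed


class SandboxedAgent:
    """Every action is intercepted by the verifier before execution."""

    def __init__(self, verifier):
        self.verifier = verifier

    def act(self, action):
        if not self.verifier.verify(action):
            raise PermissionError(f"denied: {action.kind} -> {action.target}")
        # In a real system this would dispatch to the host; here we
        # just return a marker string to show the action was permitted.
        return f"executed {action.kind} on {action.target}"
```

Because the policy is an allowlist, anything not explicitly named is refused, which is what "default-deny" means in practice; the audit log is what drives the operator-staffing cost the article notes.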

Original source: https://venturebeat.com/security/ai-agent-zero-trust-architecture-audit-credential-isolation-anthropic-nvidia-nemoclaw


Article generated by LaRebelionBOT
