Mar 23, 2026
NVIDIA NemoClaw: Secure AI Agents
AI agents can automate complex tasks and work for you around the clock. But running autonomous agents comes with serious security risks. They can leak credentials, access unauthorized systems, or run uncontrolled code. NVIDIA launched NemoClaw at GTC 2026 to solve this problem with an open-source security stack for AI agents.
What is NemoClaw?
NemoClaw is an open-source reference stack from NVIDIA that makes it safe to run OpenClaw always-on assistants. OpenClaw is an AI agent framework that lets agents perform tasks autonomously; NemoClaw wraps it with enterprise-grade security, privacy, and policy controls.
The project was announced on March 16, 2026 at GTC. NVIDIA describes it as an "operating system for personal AI." The stack combines OpenClaw's agent capabilities with OpenShell's sandboxing technology and NVIDIA's Nemotron inference models. You can deploy secure agents across cloud, on-premises, RTX PC, and DGX Spark environments.
NemoClaw is licensed under Apache 2.0 and currently in alpha status. Early adopters include Box, Cisco, Atlassian, and Salesforce. You can find it on GitHub at https://github.com/NVIDIA/NemoClaw where it has already earned over 12,000 stars.
Key Features
Sandboxed Execution with OpenShell: Every OpenClaw agent runs inside an OpenShell sandbox. The sandbox uses Landlock, seccomp, and network namespace isolation at the kernel level. Filesystem access is restricted to /sandbox and /tmp for writes. No access is granted by default. This follows a zero-trust security model where the agent must earn every permission.
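The deny-by-default filesystem rule can be illustrated with a short sketch. This is not NemoClaw's actual implementation; the `is_write_allowed` helper and the allowlist are hypothetical, standing in for checks the kernel enforces via Landlock:

```python
from pathlib import Path

# Hypothetical allowlist mirroring the sandbox's only writable roots.
WRITABLE_ROOTS = (Path("/sandbox"), Path("/tmp"))

def is_write_allowed(target: str) -> bool:
    """Deny by default: a write is permitted only under an allowed root."""
    resolved = Path(target).resolve()  # collapse ".." and symlinked segments
    return any(resolved.is_relative_to(root) for root in WRITABLE_ROOTS)
```

Resolving the path before checking it matters: a naive prefix test would let `/sandbox/../etc/shadow` slip through, whereas the resolved form falls outside both allowed roots and is rejected.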
Inference Routing: Model calls from the agent never leave the sandbox directly. OpenShell intercepts all inference requests and routes them to configured providers. This means API keys stay outside the sandbox where the agent cannot access them. You can switch models at runtime without restarting the agent. The default provider is NVIDIA Cloud running Nemotron 3 Super 120B.
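The key idea here, credentials living in the router process rather than the sandbox, can be sketched in a few lines of Python. The class name, the `route()` contract, and the provider names are all illustrative assumptions, not NemoClaw's real API:

```python
class InferenceRouter:
    """Runs outside the sandbox and holds provider credentials.

    The agent only ever hands over a prompt; the API key is attached
    on this side of the boundary, so the sandboxed process never sees it.
    """

    def __init__(self, providers: dict[str, str], default: str):
        self._keys = providers   # provider name -> API key
        self._active = default

    def switch(self, provider: str) -> None:
        # Runtime model switch, no agent restart required.
        if provider not in self._keys:
            raise KeyError(f"unknown provider: {provider}")
        self._active = provider

    def route(self, sandbox_request: dict) -> dict:
        # The request from the sandbox carries only the prompt.
        return {
            "provider": self._active,
            "prompt": sandbox_request["prompt"],
            "auth": self._keys[self._active],  # injected outside the sandbox
        }
```

Because `switch()` only changes which stored key gets attached, swapping models is a control-plane operation that the agent itself cannot trigger or observe.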
Declarative Network Policy: Egress rules are defined in simple YAML files. When an agent tries to reach an unknown host, the request is blocked and surfaced in the terminal for the operator to approve or deny. Approvals persist for the session but are never written back to the baseline policy, which keeps the security posture tight.
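The session-versus-baseline distinction can be made concrete with a small sketch. The `EgressPolicy` class is a hypothetical stand-in, not NemoClaw's real policy engine:

```python
class EgressPolicy:
    """Sketch of session-scoped operator approvals layered over a baseline."""

    def __init__(self, baseline: set[str]):
        self.baseline = set(baseline)  # persisted policy loaded from YAML
        self.session: set[str] = set() # operator approvals, discarded at exit

    def allows(self, host: str) -> bool:
        return host in self.baseline or host in self.session

    def approve(self, host: str) -> None:
        # Approval is session-only: the baseline file is never rewritten.
        self.session.add(host)
```

Keeping approvals in a separate, ephemeral set means a one-off exception never silently widens the saved policy.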
Blueprint Lifecycle Management: NemoClaw uses versioned Python artifacts called blueprints for all orchestration logic. Each blueprint goes through four stages: resolve, verify, plan, and apply. The verify step checks artifact digests to prevent tampering, making deployments reproducible and tamper-evident.
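The verify stage boils down to comparing a cryptographic digest of the artifact against a pinned expected value. A minimal sketch, assuming SHA-256 and a hypothetical `verify_blueprint` helper (NemoClaw's actual verify step may differ):

```python
import hashlib
from pathlib import Path

def verify_blueprint(path: str, expected_digest: str) -> bool:
    """Return True only if the artifact's SHA-256 matches the pinned digest."""
    actual = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return actual == expected_digest
```

A mismatch means the artifact changed after the digest was pinned, whether by corruption or tampering, and the launch should abort before the plan stage runs.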
Single CLI Interface: The nemoclaw command manages the entire stack. One tool handles the gateway, sandbox, inference provider, and network policy. An interactive onboarding wizard guides new users through setup. You can check status, stream logs, and connect to sandboxes from a single command line.
How It Works
NemoClaw follows a thin-plugin, versioned-blueprint architecture. The TypeScript plugin handles user interaction and CLI commands. The Python blueprint contains the actual orchestration logic. This separation keeps the user-facing tool lightweight while the heavy work lives in versioned, verifiable artifacts.
The security model has four protection layers. The network layer blocks unauthorized outbound connections and can be updated at runtime. The filesystem layer locks down reads and writes at sandbox creation. The process layer blocks privilege escalation and dangerous system calls. The inference layer reroutes all model API calls through controlled backends.
When you launch NemoClaw, the CLI resolves the correct blueprint version. The blueprint digest gets verified against an expected value. Then the system plans which OpenShell resources to create. Finally, it applies the plan by running OpenShell CLI commands. The agent starts inside the sandbox with all security controls active from the first moment.
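The launch flow above can be sketched as a four-stage pipeline. The `launch` function and its callables are illustrative stand-ins for the real resolve, verify, plan, and apply stages, not NemoClaw's actual code:

```python
def launch(blueprint_name: str, registry: dict, verify, plan, apply) -> None:
    """Sketch of the resolve -> verify -> plan -> apply launch flow."""
    artifact = registry[blueprint_name]      # resolve: pick the pinned version
    if not verify(artifact):                 # verify: check the artifact digest
        raise RuntimeError("digest mismatch; refusing to launch")
    resources = plan(artifact)               # plan: compute OpenShell resources
    apply(resources)                         # apply: run OpenShell CLI commands
```

The ordering is the point: nothing touches OpenShell until the digest check passes, so a tampered blueprint fails before any sandbox resources exist.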
Network policies define exactly which endpoints the agent can reach. The baseline policy includes access to Anthropic API, NVIDIA inference endpoints, GitHub, npm registry, and a few other services. Each policy entry specifies the protocol, port, and even which binary can make the connection. Anything not on the list gets blocked.
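Per-entry matching on protocol, port, and binary can be sketched like this. The `PolicyEntry` field names are illustrative guesses at what such a schema might contain, not NemoClaw's real YAML format:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PolicyEntry:
    """One egress rule; field names are illustrative, not the real schema."""
    host: str
    protocol: str
    port: int
    binary: str  # which executable may open the connection

def connection_allowed(entries, host, protocol, port, binary) -> bool:
    # Default deny: every field of some entry must match exactly.
    return any(
        e.host == host and e.protocol == protocol
        and e.port == port and e.binary == binary
        for e in entries
    )
```

Binding each rule to a specific binary means that even an allowed host is unreachable from any process other than the one the operator intended.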
Getting Started
- Install NemoClaw by running:
  curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash
- Run the onboarding wizard:
  nemoclaw onboard
- Get an NVIDIA API key from build.nvidia.com for inference routing.
- Connect to your assistant:
  nemoclaw my-assistant connect
- Launch the OpenClaw TUI:
  openclaw tui
Your system needs at least 4 vCPU, 8 GB RAM, and 20 GB of disk space. The sandbox image is about 2.4 GB compressed. Linux with Docker is the primary supported platform. macOS works experimentally through Colima or Docker Desktop. Windows WSL is supported with Docker Desktop's WSL backend.
Conclusion
NemoClaw brings production-ready safety to autonomous AI agents. The zero-trust sandbox model, transparent inference routing, and declarative policies give operators real control over what agents can do. The thin-plugin/versioned-blueprint split keeps the architecture clean, and supply-chain safety through digest verification adds another layer of protection.
The project is still in alpha, so interfaces and APIs will change. Gartner recommends waiting for more maturity before full enterprise adoption. But the direction is solid. For teams exploring autonomous AI agents, NemoClaw offers a strong security foundation backed by NVIDIA's open-source commitment.
Check out NVIDIA NemoClaw on GitHub at https://github.com/NVIDIA/NemoClaw to start securing your AI workflows today.