I've been following RSA Conference coverage for years, and there's a reliable pattern. Every year has a buzzword. Last year it was "zero trust." The year before, "supply chain security." You could always tell what the industry had decided to care about by counting booth banners.
This year's buzzword is "agentic AI security." And unlike most RSAC themes, this one isn't manufactured hype — it's a response to a real problem that's been building for months.
Roughly 40% of RSAC 2026's agenda was AI-weighted. NVIDIA announced NemoClaw. Cisco launched DefenseClaw. SentinelOne shipped AI agent red-teaming. Zenity presented on shadow AI governance. NeuralTrust, NSFOCUS, SecurityScorecard — nearly every major security vendor had something to say about securing AI agents.
Here's what happened, what it means, and why it matters if you're running OpenClaw.
The Big Announcements
NVIDIA NemoClaw
Announced at GTC the week before RSAC, NemoClaw is NVIDIA's play for making OpenClaw enterprise-ready. It runs on their new OpenShell secure runtime and adds three layers of security: input guardrails that filter dangerous instructions, sandboxed execution that isolates skill operations, and monitoring that tracks agent behavior over time.
The pitch is straightforward — install NemoClaw, get a single-command deployment of OpenClaw with security guardrails baked in. It also integrates Groq for faster inference, which is a nice touch for latency-sensitive workloads.
The reality is more nuanced. The New Stack published a detailed technical analysis with a blunt headline: "Nvidia's NemoClaw has three layers of agent security. None of them are enough." Their point — and it's a fair one — is that each individual layer has known bypass techniques. Guardrails can be circumvented through sophisticated prompt injection. Sandboxes have escape vectors. Monitoring catches things after they happen, not before.
That doesn't make NemoClaw useless. It makes it necessary but insufficient. More on that later.
Cisco DefenseClaw
If NemoClaw is about securing the agent's runtime, DefenseClaw is about securing the agent's relationship with everything around it. Cisco's framework takes a zero-trust approach to AI agent security, which in practice means:
- AI Bill of Materials (ABOM) — an inventory of every model, skill, and data source an agent uses. Think SBOM but for AI components.
- Sandbox scanners — automated testing of skills and extensions before they're allowed to run in production.
- Code-guard tools — runtime protection that monitors agent-generated code before execution.
DefenseClaw is built on top of NVIDIA's OpenShell runtime, which creates an interesting dependency chain: OpenClaw running inside OpenShell, secured by NemoClaw guardrails, governed by DefenseClaw policies. Three layers of abstraction, three different vendors, three different failure modes.
ZDNET's coverage raised the question nobody at the booth demo wants to answer: "Who sees the alerts when something goes wrong at 2 a.m.?" DefenseClaw can generate telemetry, but someone still needs to be on-call for an AI agent security incident. Most organizations aren't staffed for that.
SentinelOne: Agent Red-Teaming
SentinelOne used RSAC to announce AI agent red-teaming capabilities built into their Purple AI platform. The idea is to proactively test AI agents for prompt injection vulnerabilities, data exfiltration paths, and privilege escalation before attackers find them.
This is genuinely useful. Most organizations running OpenClaw have never tested whether their agent can be tricked into doing something harmful. SentinelOne's approach treats the agent like any other attack surface and applies offensive security methodology to it.
Zenity: Shadow AI Governance
Zenity presented on what might be the most underappreciated problem in the entire space: AI agents deployed without IT's knowledge. Their research suggests that OpenClaw instances are proliferating inside enterprises the same way personal Dropbox accounts did a decade ago — developers install them because they're useful, IT has no visibility, and nobody thinks about the security implications until something goes wrong.
The parallel to shadow IT is apt. And just like shadow IT, the solution isn't banning the tools — it's providing secure alternatives that offer the same utility with proper governance.
Everything Else
NSFOCUS announced a multi-layer defense system specifically for OpenClaw deployments. SecurityScorecard integrated agent risk scoring into their platform. NeuralTrust launched a lifecycle security approach for AI agents. Adam Bluhm presented "Hello, DarkClaw!" — a demonstration of turning OpenClaw against its owner through manipulation techniques.
The sheer volume of announcements tells a story: the security industry has collectively decided that AI agent security is a market worth investing in. When that many vendors pivot simultaneously, it means the problem is real and the customers are willing to pay.
What RSAC Got Right
The conference correctly identified the fundamental challenge: AI agents operate as autonomous entities with credentials, permissions, and the ability to take actions. They're not just software — they're non-human identities that need to be governed like any other identity in your organization.
Oasis Security's framing resonated throughout the conference: AI agents are the new non-human identities. They authenticate, they hold secrets, they make decisions, and they act. The security frameworks we have for human users and service accounts need to extend to agents.
Microsoft's pre-RSAC blog post on running OpenClaw safely laid out the three components of the agent security boundary — identity (the tokens the agent uses), execution (the tools it can run), and persistence (the ways it keeps changes across runs). Every vendor at RSAC was essentially selling a solution for one or more of these components.
What RSAC Got Wrong
Here's what bothered me about the conference coverage: almost every announcement focused on securing agent behavior — what the agent does, what it's allowed to do, how to detect when it does something wrong. That's important work. But it skips a more basic question: where is the agent running, and who controls the infrastructure?
NemoClaw secures the runtime. DefenseClaw governs agent policies. SentinelOne tests for vulnerabilities. But none of them solve the problem that 220,000 OpenClaw instances are exposed to the public internet right now with no authentication, no SSL, and no firewall. You can put the most sophisticated guardrails in the world inside an agent — if the server it runs on is accessible to anyone with a port scanner, none of that matters.
It's like putting a state-of-the-art alarm system inside a house with no front door. The alarm is great, but maybe start with the door.
The deployment layer — the infrastructure that actually runs the agent — is the foundation everything else sits on. Get that wrong and everything built on top is compromised regardless of how sophisticated it is.
What This Means for OpenClaw Users
If you're running OpenClaw, RSAC 2026 should tell you two things:
First, the threat model is real. When NVIDIA, Cisco, SentinelOne, and a dozen other security vendors build products around a specific threat, that threat is real. The security industry doesn't invest in phantom problems — there's too much money on the line.
Second, the solution is layered. There's no single product that solves AI agent security. You need secure infrastructure (the deployment layer), runtime guardrails (NemoClaw), governance policies (DefenseClaw or similar), and monitoring (SIEM integration, anomaly detection). Skipping any layer leaves gaps.
For most people running OpenClaw, the practical starting point is the infrastructure layer — because that's where the most basic failures are happening right now. You can't deploy NemoClaw guardrails on an instance that's publicly accessible with no authentication. You can't enforce DefenseClaw policies on an agent running on someone's laptop. The infrastructure has to be right first.
That's what we've been building at Clawdy since before RSAC made agent security trendy. Isolated cloud instances, authentication proxy in front of every request, API key isolation, managed updates. It's not as exciting as a three-layer guardrail system with AI-powered anomaly detection. But it's the foundation that makes everything else possible.
The Trajectory
RSAC 2026 marks the moment AI agent security went from "interesting research topic" to "funded product category." Enterprise budgets are being allocated. Vendor roadmaps are being built. Compliance frameworks will follow.
For OpenClaw specifically, the next six months will see a rapid maturation of the security ecosystem around it. NemoClaw will ship production-ready. DefenseClaw will get enterprise adoption. More vulnerabilities will be discovered and patched. The tools will get better.
But the gap between "tools available" and "tools deployed" will remain wide. The majority of OpenClaw instances are run by individuals and small teams who don't have security budgets, don't attend RSAC, and won't deploy NemoClaw or DefenseClaw.
For those users, the most impactful security improvement isn't a product announcement at a conference. It's getting their OpenClaw instance off their laptop and onto isolated infrastructure with basic security defaults. Everything else is built on top of that.
The security industry is building layers. Start with the foundation. Clawdy deploys OpenClaw on isolated infrastructure with authentication, SSL, and network isolation — in under 60 seconds. Get started at clawdy.app.