What Happened?

On April 7, 2026, 360 Digital Security Group announced that its AI vulnerability discovery agent found three more high-value security vulnerabilities in OpenClaw — 1 high and 2 medium. All vulnerabilities have been patched and publicly disclosed by the OpenClaw team.

This isn’t the first time 360 has found issues in OpenClaw. Back in late March, 360 had already disclosed a MEDIA protocol vulnerability affecting 170,000+ instances worldwide, which was confirmed by the China National Vulnerability Database (CNNVD).

Here’s the kicker: 360 used AI to find AI vulnerabilities — a multi-agent collaboration system that combines attack surface analysis, AI code auditing, and dynamic penetration testing to automatically discover security issues.

Three Vulnerabilities Hit the AI Core

According to reports, the three newly discovered vulnerabilities target the core operational mechanisms of AI agents, putting the security of user devices, data, and accounts directly at risk.

Known CVEs

| CVE ID | NVD CVSS (Severity) | Type | Affected Versions | Fixed Version |
|---|---|---|---|---|
| CVE-2026-34425 | 5.3 (Medium) | Shell-bleed protection bypass | < 2026.4.2 | 2026.4.2 |
| CVE-2026-34426 | 7.6 (High) | Authorization bypass (env var normalization) | < 2026.4.2 | 2026.4.2 |
| CVE-2026-34503 | 8.1 (High) | WebSocket session termination incomplete | < 2026.4.2 | 2026.4.2 |

Note: These are NVD scores. 360’s original report rated them as “1 High + 2 Medium”; the difference comes from the different scoring standards used by CNNVD and NVD.

Critical: MEDIA Protocol Prompt Injection Bypass

This is the most dangerous one. OpenClaw’s MEDIA protocol runs on the output post-processing layer, positioned after the platform’s tool security policy control. This means:

Even if an admin has explicitly disabled all tool calls, attackers can still exploit this vulnerability using only basic group chat member permissions — no special authorization needed — to directly steal sensitive local files from the server.

Technical characteristics:

  • Extremely low attack threshold: Only requires group chat member permissions
  • Extremely wide impact scope: 50+ countries, 170,000+ instances worldwide
  • Bypasses all defenses: Tool policy controls completely fail
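Because the MEDIA layer sits after tool-policy enforcement, output leaving it needs its own check until instances are patched. A minimal sketch of such a guard — the `filter_media_output` hook and the pattern list are illustrative assumptions, not OpenClaw’s actual API:

```python
import re

# Patterns suggesting an attempt to smuggle local files through an
# output-layer media directive (illustrative, not exhaustive).
LOCAL_FILE_PATTERNS = [
    re.compile(r"file://", re.IGNORECASE),               # local file URIs
    re.compile(r"(^|[\s\"'(])/(?:etc|home|root|var)/"),  # sensitive absolute paths
    re.compile(r"~/\.\w+"),                              # dotfiles in home dirs
]

def filter_media_output(text: str) -> tuple[str, bool]:
    """Run before the media post-processing layer consumes model output.

    Returns (text, flagged). If any pattern matches, the whole message is
    withheld rather than passed on, since tool-policy controls no longer
    apply at this stage.
    """
    for pat in LOCAL_FILE_PATTERNS:
        if pat.search(text):
            return "[media output withheld: local file reference]", True
    return text, False
```

A real deployment would log the flagged output for review instead of silently dropping it.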

Medium: WebSocket Self-Connection + Missing Token Binding

The WebSocket protocol’s self-connection mechanism combined with lack of token binding verification allows browsers to hijack local OpenClaw instances. Revoked tokens may remain valid, creating persistent unauthorized access risks.
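A common mitigation is to bind each token to the origin it was issued for and consult a revocation list on every handshake. A minimal sketch — the `TokenStore` class and its method names are illustrative, not OpenClaw’s actual API:

```python
import hmac
import secrets

class TokenStore:
    """Tracks issued tokens, binds each to a client origin, supports revocation."""

    def __init__(self):
        self._tokens = {}    # token -> bound origin
        self._revoked = set()

    def issue(self, origin: str) -> str:
        token = secrets.token_urlsafe(32)
        self._tokens[token] = origin
        return token

    def revoke(self, token: str) -> None:
        self._revoked.add(token)
        self._tokens.pop(token, None)

    def authorize_handshake(self, token: str, origin: str) -> bool:
        # Reject revoked tokens, and tokens presented from a different origin
        # than the one they were issued to — which blocks the browser
        # self-connection hijack described above.
        if token in self._revoked:
            return False
        bound = self._tokens.get(token)
        return bound is not None and hmac.compare_digest(bound, origin)
```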

Bonus: SSH Command Injection (Known Old Vulnerability)

It’s worth noting that OpenClaw was already exposed to an SSH command injection vulnerability back in February 2026 (CVE-2026-25157, CVSS 7.8 High), fixed in version 2026.1.29. That vulnerability allowed attackers to inject SSH options (like -oProxyCommand) via hostnames starting with hyphens to execute local commands. While not part of these three new vulnerabilities discovered by 360, it also reflects systemic input validation issues in OpenClaw.
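This class of bug is typically mitigated by validating hostnames and terminating option parsing before the host argument. A minimal Python sketch — the `safe_ssh_argv` helper is hypothetical, not OpenClaw’s fix:

```python
import re

# RFC-952-style hostname: must start and end with an alphanumeric character.
HOSTNAME_RE = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9.-]*[A-Za-z0-9])?$")

def safe_ssh_argv(host: str, command: str) -> list[str]:
    """Build an ssh argv that cannot be hijacked by option-like hostnames.

    A host such as '-oProxyCommand=payload' would otherwise be parsed by
    ssh as an option and execute the payload locally.
    """
    if host.startswith("-") or not HOSTNAME_RE.match(host):
        raise ValueError(f"refusing suspicious hostname: {host!r}")
    # '--' ends option parsing, so even an unusual-but-valid host
    # can never be interpreted as an option.
    return ["ssh", "--", host, command]
```

Pass the resulting list to `subprocess.run(argv)` rather than interpolating the host into a shell string.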

It’s Not Just Vulnerabilities: The Supply Chain Is Already Compromised

Beyond these three new vulnerabilities, the bigger threat to the OpenClaw ecosystem comes from supply chain attacks.

After scanning the ClawHub marketplace (OpenClaw’s official plugin marketplace), security researchers found:

  • 340+ malicious Skills plugins (infection rate ~10.8% out of 3,016 samples)
  • 7.1% contain plaintext credential leaks
  • Many instances exposed to the public internet, becoming easy targets for attackers

The attack chain is clear: Base64 encode → decode → curl download → execute malicious payload → establish persistent backdoor

This is like the npm or PyPI supply chain poisoning incidents, except happening in the AI Agent world.
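That decode-and-execute chain leaves a recognizable static signature: a long Base64 blob that decodes to a download-and-run command. A crude scanner can be sketched as follows — the patterns and length threshold are illustrative heuristics, not 360’s actual detection logic:

```python
import base64
import binascii
import re

# Candidate Base64 blobs: long runs of Base64 alphabet with optional padding.
B64_BLOB = re.compile(r"[A-Za-z0-9+/]{24,}={0,2}")
# Download-and-execute stager patterns inside the decoded payload.
SUSPICIOUS = re.compile(
    r"(curl|wget)\s+.*\|\s*(sh|bash)|chmod\s+\+x", re.IGNORECASE
)

def scan_plugin_source(source: str) -> list[str]:
    """Return decoded Base64 payloads that look like download-and-execute stagers."""
    hits = []
    for blob in B64_BLOB.findall(source):
        try:
            decoded = base64.b64decode(blob, validate=True).decode("utf-8", "replace")
        except (binascii.Error, ValueError):
            continue  # not valid Base64 after all
        if SUSPICIOUS.search(decoded):
            hits.append(decoded)
    return hits
```

A scanner like this only catches the simplest single-layer encodings; real supply-chain malware often nests several obfuscation layers.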

Four Layers of Attack Surface for AI Agents

Based on analysis from 360 and NSFOCUS, OpenClaw’s architecture has four layers of attack surface:

| Layer | Attack Vector | Risk |
|---|---|---|
| Entry Layer | Prompt injection (direct/indirect), API gateway auth bypass | Remote code execution |
| Decision Layer | LLM logic manipulation, memory poisoning | AI decision tampering |
| Execution Layer | Tool privilege escalation, running as root | Data theft, system control |
| Ecosystem Layer | ClawHub supply chain poisoning, unsigned plugins | Large-scale backdoor implantation |

Enterprise Self-Protection Guide

If you’re using OpenClaw (or any AI Agent framework) in production, here are the security measures you must implement:

Immediate Actions

  1. Upgrade to 2026.4.2+ — patch all known CVEs
  2. Use Docker containerization — never run directly on bare metal
  3. Use non-root users — restrict container privileges, enable no-new-privileges
  4. Disable public internet exposure — use reverse proxy + IP whitelist
  5. Audit ClawHub plugins — remove all unaudited third-party Skills
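Items 2–4 above translate into a few lines of container configuration. A hedged docker-compose sketch — the service name, image tag, and network layout are illustrative assumptions, not an official OpenClaw deployment:

```yaml
services:
  openclaw:
    image: openclaw:2026.4.2          # 1. patched release (illustrative tag)
    user: "1000:1000"                 # 3. run as a non-root user
    security_opt:
      - no-new-privileges:true        # 3. block privilege escalation
    cap_drop: [ALL]                   # 3. drop all Linux capabilities
    ports: []                         # 4. no direct public exposure; publish
                                      #    only through the reverse proxy
    networks: [internal]

networks:
  internal:
    internal: true                    # 4. no direct route to the internet
```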

Long-term Protection

  • Encrypt API keys — use .env management, never hardcode
  • Network isolation — use iptables to restrict outbound connections
  • Redline rules — set up an absolute deny list
  • Automated monitoring — regularly scan configuration fingerprints, detect anomalies

  [User] → [Reverse Proxy (TLS)] → [Docker Container (non-root)]
                                          ↓
                                   [Least-privilege tools]
                                   [Network isolation]
                                   [Encrypted keys]

Using AI to Find AI Vulnerabilities

The most notable thing about 360’s discovery isn’t just the vulnerabilities themselves — it’s the method they used to find them.

They used a “multi-agent collaborative vulnerability discovery system” — multiple AI agents each handling attack surface analysis, code auditing, and dynamic penetration testing, working together to automatically discover security issues.

This hints at a trend: AI Agent security issues will ultimately be solved by AI Agents. Both attackers and defenders are using AI, and the security field has officially entered the Agent vs Agent era.

Conclusion

OpenClaw’s explosive growth (170,000+ instances worldwide) proves that the AI Agent era has arrived. But speed brings risk — when a vulnerable version is deployed tens of thousands of times within weeks, these “homogenized assets” become ideal targets for mass attacks.

For teams building AI Agents, security isn’t an afterthought — it’s the first priority in architecture design. After all, how autonomous your AI Agent is determines how much damage is done when it’s compromised.


Sources: BlockBeats, Guancha (觀察者網), Shushuo Security (數說安全), NSFOCUS (綠盟科技), CN-SEC