OpenClaw Log Poisoning Enables Indirect Prompt Injection Risk
Key Takeaways
- OpenClaw logged unfiltered WebSocket header values (Origin, User-Agent) in pre-connect code paths.
- Unauthenticated clients could get crafted external input recorded into the agent's logs.
- Logs may be consumed by LLM-assisted agent workflows, creating an indirect prompt injection risk.
- A patch was released to sanitize/truncate header values (OpenClaw ≥ 2026.2.13).
Log Poisoning in OpenClaw Creates Indirect Prompt Injection Risk
Security analysts identified a log poisoning vulnerability in the open-source OpenClaw AI assistant that allowed crafted WebSocket header values to be written directly into the system logs. Because these logs may later be read or interpreted during AI-assisted debugging or reasoning, the injected content could manipulate an agent's context or guidance. This constitutes an indirect prompt injection risk rather than a classic code-execution flaw or an immediate takeover. Responsible disclosure led to a fix in the OpenClaw codebase. (Eye Research)
What We Know
Security researchers analyzing OpenClaw found that the WebSocket handler logged the Origin and User-Agent header values without normalization or truncation when connections closed early. Unauthenticated clients could send crafted WebSocket requests whose oversized headers would be written verbatim into the logs. If the agent later ingested those logs during AI-assisted debugging or analysis workflows, the crafted entries could be interpreted as contextual instructions, affecting the agent's suggestions or diagnostic output. The flaw does not directly execute arbitrary commands, but it can influence the agent's reasoning or prompt interpretation wherever logs are treated as trusted context.
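To make the pattern concrete, here is a minimal TypeScript sketch of the kind of logging path described above, assuming a Node.js WebSocket gateway; the handler shape, log format, and field names are illustrative assumptions, not OpenClaw's actual code:

```ts
import { createServer } from "node:http";

const server = createServer();

// Illustrative vulnerable pattern: when an upgrade attempt ends before the
// handshake completes, attacker-controlled header values are written to the
// log verbatim, with no length cap or sanitization.
server.on("upgrade", (req, socket) => {
  socket.on("close", () => {
    // Origin and User-Agent are fully client-controlled, so a remote,
    // unauthenticated client can place arbitrary text (including very large
    // or prompt-style content) into the log stream.
    console.warn(
      `[gateway] websocket closed before handshake: ` +
        `origin=${req.headers.origin} ua=${req.headers["user-agent"]}`
    );
  });
  socket.destroy(); // the early-termination path described in the advisory
});

server.listen(8080);
```

Any tooling or agent that later reads these log lines sees the attacker-supplied text interleaved with legitimate gateway output.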
How the Breach Happened
OpenClaw’s logging implementation wrote unfiltered WebSocket header fields into log messages when connections terminated before completing the handshake. These header values included potentially attacker-controlled fields such as Origin and User-Agent, and OpenClaw did not impose length limits or sanitization. An unauthenticated attacker could send a WebSocket handshake that triggered this code path, embedding crafted content into the agent’s log files. If those logs were later referenced in an AI reasoning or debugging operation, that content could be inadvertently included in the model’s context and influence its outputs or actions. (GitHub)
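The published fix reportedly sanitizes and truncates header values before they are logged (OpenClaw ≥ 2026.2.13, per the takeaways above). The TypeScript sketch below shows one hardening approach along those lines; the helper name and length cap are assumptions for illustration, not the project's actual implementation:

```ts
// Hypothetical helper: cap length and strip control characters before a
// client-supplied header value ever reaches the log stream.
const MAX_HEADER_LOG_LEN = 256; // assumed cap, not OpenClaw's actual value

function sanitizeForLog(value: string | string[] | undefined): string {
  const raw = Array.isArray(value) ? value.join(",") : value ?? "";
  // Drop ASCII control characters (which covers ANSI escape introducers),
  // then truncate so oversized headers cannot flood or reshape log output.
  const cleaned = raw.replace(/[\x00-\x1f\x7f]/g, "");
  return cleaned.length > MAX_HEADER_LOG_LEN
    ? `${cleaned.slice(0, MAX_HEADER_LOG_LEN)}…[truncated]`
    : cleaned;
}

// At the logging call site, every untrusted field passes through the helper:
// console.warn(`[gateway] closed early: origin=${sanitizeForLog(req.headers.origin)}`);
```

Truncation and control-character stripping limit how much attacker text can land in a single log line, but downstream AI workflows should still treat log content as untrusted input rather than as instructions.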
Why It Matters
Unlike many vulnerabilities that target direct code execution, this issue exemplifies how AI agents change the threat model: internal logs become part of an agent’s reasoning context and therefore a new attack surface. Poisoned logs could subtly manipulate an AI assistant’s interpretation of its environment, suggestions to users, or automated remediation steps. Even if the crafted headers do not directly cause unauthorized actions, they can degrade trust in agent output or lead to incorrect decisions. This extends the classical concept of log poisoning into the realm of indirect prompt injection in autonomous AI workflows.
PointGuard AI Perspective
The OpenClaw log poisoning case reinforces the need for AI-centric security controls that account for how autonomous agents consume and reason over internal artifacts such as logs. Traditional application security focuses on memory safety, input sanitization, and access control. But when logs become AI context, untrusted data can influence agent behavior even without classical exploitable vulnerabilities like RCE or privilege escalation.
PointGuard AI addresses these risks by providing visibility into how AI components process contextual data, detecting when untrusted inputs could enter reasoning contexts, and enforcing data sanitization and contextual integrity policies before AI workflows incorporate that data. Additionally, PointGuard AI’s monitoring and risk scoring help identify emergent threat surfaces (like logs influencing LLM prompts) that are invisible to legacy SIEM and EDR systems.
By expanding threat models to include AI reasoning pipelines, PointGuard AI helps organizations preempt exploitation chains that leverage indirect prompt injection via surrogate data sources (logs, caches, telemetry) — reducing operational risk and improving trust in AI-assisted automation.
Incident Scorecard Details
Total AISSI Score: 6.2/10
- Criticality = 6 — Not direct execution, but impacts agent reasoning context (25%).
- Propagation = 5 — Requires an exposed WebSocket gateway; not universal (20%).
- Exploitability = 7 — Can be triggered by unauthenticated crafted headers (15%).
- Supply Chain = 4 — Vulnerability resides in open-source agent ecosystem (15%).
- Business Impact = 7 — Potential influence on troubleshooting output and automated decisions (25%).
Sources
- OpenClaw log poisoning advisory — GitHub (GHSA-g27f-9qjv-22pm), Feb 14 2026. (GitHub)
- Log poisoning analysis in OpenClaw — Eye Security, Feb 17 2026. (Eye Research)
