
OpenCode AI UI Turns Chat Output Into Code (CVE-2026-22813)

Key Takeaways

  • OpenCode rendered AI output without proper HTML sanitization
  • Malicious content could execute JavaScript in the local UI context
  • Exploitation could lead to local command execution
  • No confirmed real-world exploitation reported

When AI Output Becomes an Attack Surface

In January 2026, researchers disclosed CVE-2026-22813, a critical cross-site scripting vulnerability affecting the OpenCode AI coding agent. The issue allowed malicious HTML generated by an AI model to execute scripts in the OpenCode web interface. While no exploitation has been reported, the flaw highlights how AI-generated content can become a direct execution pathway when treated as trusted UI input.

What Happened

CVE-2026-22813 was disclosed publicly through the NVD after researchers identified unsafe rendering of AI-generated markdown in the OpenCode web interface. OpenCode is an open-source AI coding agent that provides a browser-based UI running on localhost to display model output and interact with backend automation features.

The vulnerability existed because OpenCode directly inserted AI responses into the DOM without sanitizing HTML or enforcing a restrictive content security policy. If a malicious or manipulated model response contained embedded scripts, the browser would execute them automatically when rendered.
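
The sketch below is illustrative only, not OpenCode's actual code. It assumes a markdown-to-HTML pipeline built on the marked and DOMPurify libraries, with hypothetical function names, and contrasts the unsafe pattern (model output inserted straight into the DOM) with a sanitized rendering path.

```typescript
// Illustrative sketch only -- not OpenCode's implementation.
import DOMPurify from "dompurify";
import { marked } from "marked";

// Unsafe: model output is converted to HTML and inserted directly into the DOM.
// Markup such as <img src=x onerror="..."> in the response runs script on render.
function renderUnsafe(container: HTMLElement, modelMarkdown: string): void {
  container.innerHTML = marked.parse(modelMarkdown) as string;
}

// Safer: sanitize the generated HTML before insertion so script-bearing
// elements and attributes are stripped.
function renderSanitized(container: HTMLElement, modelMarkdown: string): void {
  const html = marked.parse(modelMarkdown) as string;
  container.innerHTML = DOMPurify.sanitize(html);
}
```

Sanitizing at the rendering boundary lets the model emit rich markdown while anything executable is removed before it reaches the DOM.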

Because the OpenCode UI exposes local APIs intended for legitimate development automation, executing JavaScript in this context could allow attackers to invoke backend endpoints and execute arbitrary system commands. A patch addressing the issue was released in OpenCode version 1.1.10.
Sources include NIST NVD and OpenCVE.
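
As a complementary, defense-in-depth illustration of the missing content security policy noted above, a localhost UI server can refuse inline scripts and event handlers outright. The snippet is a generic Node/TypeScript sketch, not OpenCode's server; the port and response body are placeholders.

```typescript
// Generic sketch of a localhost UI server shipping a restrictive CSP header.
// Not OpenCode's code; port and page content are placeholders.
import http from "node:http";

const server = http.createServer((_req, res) => {
  // Without 'unsafe-inline', inline <script> tags and inline event handlers
  // injected through rendered content are refused by the browser.
  res.setHeader(
    "Content-Security-Policy",
    "default-src 'self'; script-src 'self'; object-src 'none'; base-uri 'none'"
  );
  res.setHeader("Content-Type", "text/html; charset=utf-8");
  res.end("<!doctype html><title>Local UI</title>");
});

server.listen(3000, "127.0.0.1");
```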

How the Vulnerability Works

This incident is best categorized as an AI-assisted XSS vulnerability, where the model output itself becomes the attack vector. Unlike traditional XSS, no user-supplied form input was required. Instead, the application implicitly trusted AI-generated content.

The AI-specific risk arises from treating model responses as safe by default. Large language models can generate arbitrary HTML, JavaScript, or event handlers when prompted or influenced by external content, and without sanitization the rendering layer cannot distinguish benign formatting from a malicious payload.
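
A minimal defensive default, sketched below with a hypothetical helper rather than anything from OpenCode's codebase, is to render model output as inert text unless a sanitized rich-rendering path is explicitly chosen.

```typescript
// Hypothetical helper, not OpenCode code: render model output as plain text
// so any markup in the response is displayed, never interpreted.
function renderAsText(container: HTMLElement, modelOutput: string): void {
  const block = document.createElement("pre");
  block.textContent = modelOutput; // assigned as text; the browser never parses it as HTML
  container.appendChild(block);
}
```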

From a traditional security perspective, the failure was a lack of output encoding and DOM sanitization. From an AI security perspective, it demonstrates a broader pattern where AI systems blur the line between data and executable logic, especially in developer tools that combine AI output with automation capabilities.

Why It Matters

Although there are no reports of active exploitation, CVE-2026-22813 presents a meaningful risk to developer environments. Local AI tools often run with elevated permissions and access to source code, credentials, and build pipelines. Executing arbitrary commands in this context could enable credential theft, source code manipulation, or lateral movement into enterprise systems.

The incident reinforces a growing reality in AI security: AI-generated content must be treated as untrusted input, particularly when rendered in rich interfaces or connected to execution APIs. As AI agents become more autonomous and more tightly integrated into developer workflows, failures like this can quietly introduce powerful attack surfaces.

From a governance standpoint, this case illustrates why secure-by-design AI tooling is essential, even in open-source and developer-focused environments.

PointGuard AI Perspective

CVE-2026-22813 highlights a recurring issue PointGuard AI observes across AI-enabled applications: AI output is frequently trusted in ways that bypass traditional security assumptions.

PointGuard AI helps organizations identify and mitigate these risks by treating AI outputs, agents, and interfaces as first-class security subjects. This includes mapping where AI-generated content flows into execution paths, enforcing policy controls around AI-to-system interactions, and continuously monitoring for anomalous behavior tied to AI agents or developer tools.

In scenarios like OpenCode, runtime controls that restrict what AI outputs are allowed to trigger, combined with visibility into agent behavior and execution chains, significantly reduce risk. AI SBOM visibility and policy enforcement ensure that even experimental or open-source AI components are governed consistently.

As AI agents increasingly act as intermediaries between humans and systems, securing the AI interface layer will be critical to building trustworthy AI ecosystems.

Incident Scorecard Details

Total AI Security Severity Index (AISSI) Score: 6.0 / 10

Criticality = 8.5
Arbitrary code execution is possible through AI-rendered content.
AISSI weighting: 25%

Propagation = 5.5
Impact limited to local developer environments running vulnerable versions.
AISSI weighting: 20%

Exploitability = 6.5
Requires crafted AI output and user interaction; no automation observed.
AISSI weighting: 15%

Supply Chain = 5.0
Open-source tooling with downstream ecosystem exposure.
AISSI weighting: 15%

Business Impact = 4.0
No confirmed exploitation or business disruption reported to date.
AISSI weighting: 25%


Scoring Methodology

Criticality (weight 25%): Importance and sensitivity of the affected assets and data.

Propagation (weight 20%): How easily the issue can escalate or spread to other resources.

Exploitability (weight 15%): Whether the threat is actively being exploited or only demonstrated in a lab.

Supply Chain (weight 15%): Whether the threat originated with, or was amplified by, third-party vendors.

Business Impact (weight 25%): Operational, financial, and reputational consequences.
