
AI Code Editor Turns Prompts Into File Overwrites (CVE-2025-64108)

Key Takeaways

  • Prompt injection enabled unintended filesystem access
  • AI-assisted coding workflows expanded the attack surface
  • Exploit bridges model behavior and operating system functions
  • No confirmed in-the-wild exploitation reported

Prompt Injection Escapes the Model Sandbox

A newly disclosed vulnerability in an AI-powered code editor demonstrates how prompt injection can escalate beyond model manipulation into direct interaction with local system resources. By exploiting how AI-generated actions were translated into file operations, attackers could overwrite protected files, exposing a new class of risk in AI-assisted development environments.

What We Know

On January 12, 2026, the National Vulnerability Database published details for CVE-2025-64108, a vulnerability affecting Cursor, an AI-assisted code editor that integrates large language model capabilities directly into developer workflows. According to the NVD entry, the issue allows prompt injection to interact with NTFS path handling in a way that bypasses expected file protections.

The vulnerability arises when crafted prompts influence how the AI component generates file operations, allowing writes to locations that should be restricted. While the flaw relies on underlying operating system behavior, the exploit path is initiated through AI prompt manipulation rather than direct user commands (NVD CVE-2025-64108).
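
The advisory does not spell out the exact bypass, but the underlying failure mode is well documented: string comparisons on raw paths do not survive NTFS normalization. The following sketch is purely illustrative (the guard functions, the protected file names, and the simplified normalization are assumptions, not Cursor's actual code) and shows how a naive basename check can be sidestepped by trailing-dot and alternate-data-stream path forms that Windows resolves to the same file.

# Hypothetical illustration only: neither guard reflects Cursor's actual code.
import ntpath  # Windows path semantics, available on every platform

PROTECTED = {"settings.json", ".cursorrules"}  # assumed protected files

def naive_guard(path: str) -> bool:
    """Allow a write unless the literal basename matches a protected file."""
    return ntpath.basename(path).lower() not in PROTECTED

def normalized_guard(path: str) -> bool:
    """Same check, but after emulating two NTFS behaviors: 'name:stream'
    addresses an alternate data stream of 'name', and trailing dots and
    spaces are stripped when the file is opened (simplified emulation)."""
    name = ntpath.basename(ntpath.normpath(path))
    name = name.split(":", 1)[0]   # drop alternate-data-stream suffix
    name = name.rstrip(". ")       # Windows ignores trailing dots/spaces
    return name.lower() not in PROTECTED

for candidate in (r"C:\proj\settings.json",       # blocked by both guards
                  r"C:\proj\settings.json.",      # naive guard allows it
                  r"C:\proj\settings.json:ads"):  # naive guard allows it
    print(f"{candidate!r:35} naive={naive_guard(candidate)} "
          f"normalized={normalized_guard(candidate)}")

The robust version compares only after emulating the operating system's own normalization; in production code the safer approach is to let the OS resolve the path to its final form and compare that.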

The issue was reported responsibly and patched by the vendor. No public evidence indicates that the vulnerability was exploited in real-world attacks at the time of disclosure.

What Could Happen

If exploited, this class of vulnerability could allow attackers to abuse AI-assisted coding tools to modify sensitive files, inject malicious code, or alter configuration settings without direct user intent. Because the attack begins with prompt manipulation, it could bypass traditional safeguards that assume file operations are initiated explicitly by a developer.

AI-specific characteristics amplify the risk. AI coding tools are designed to act autonomously on developer instructions, translating natural-language prompts into executable actions. When those actions are not constrained by strict policy enforcement, prompt injection can become a bridge from semantic manipulation into system-level compromise.
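
One mitigation pattern is to interpose a policy gate between the model's proposed action and its execution. Below is a minimal sketch of that idea, assuming a single approved workspace root; the function name and paths are illustrative, not any specific product's API.

# Minimal policy gate for AI-generated file writes (illustrative sketch).
from pathlib import Path

WORKSPACE = Path("/home/dev/project").resolve()  # assumed approved root

def authorize_write(requested: str) -> Path:
    """Resolve the requested path (collapsing '..' and symlinks) and
    refuse anything that escapes the approved workspace."""
    resolved = (WORKSPACE / requested).resolve()
    if not resolved.is_relative_to(WORKSPACE):  # Python 3.9+
        raise PermissionError(f"AI action blocked: {resolved} is outside {WORKSPACE}")
    return resolved

# authorize_write("src/main.py")      -> permitted, returns resolved path
# authorize_write("../../etc/hosts")  -> raises PermissionError

Because the comparison happens only after full path resolution, the same gate covers '..' traversal, absolute paths, and symlinked escapes in one place.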

Similar patterns could emerge in other AI-powered development environments where models are granted elevated access to local resources.

Why It Matters

AI-assisted development tools are rapidly becoming part of standard software workflows. Vulnerabilities like this highlight how tightly coupling AI output with privileged system operations can introduce new security risks that traditional application threat models do not fully address.

From a business perspective, compromised development environments can lead to supply chain risk, malicious code introduction, and loss of trust in AI tooling. Even in the absence of confirmed exploitation, the vulnerability underscores the need for stronger isolation and policy enforcement between AI generated actions and sensitive system resources.

This incident also reinforces guidance in the NIST AI Risk Management Framework around managing autonomy and limiting unintended behavior in AI systems used in critical workflows.

PointGuard AI Perspective

PointGuard AI helps organizations identify and reduce risks introduced by AI-assisted development and agentic workflows. By providing visibility into how AI systems interact with local resources, APIs, and development pipelines, PointGuard AI enables teams to detect unexpected behavior before it results in compromise.

Policy enforcement capabilities help ensure that AI-generated actions are constrained to approved operations, preventing prompt injection from escalating into filesystem access or code modification. Continuous monitoring allows organizations to detect anomalous tool behavior that may indicate misuse or exploitation.

As AI becomes embedded in software creation and operational workflows, PointGuard AI supports secure adoption by enforcing guardrails that align AI autonomy with organizational security and governance requirements.

Incident Scorecard Details

Total AISSI Score: 5.3/10

  • Criticality = 7 (AISSI weighting 25%): Impacts development environments and local system integrity
  • Propagation = 5 (AISSI weighting 20%): Limited to affected installations and environments
  • Exploitability = 4 (AISSI weighting 15%): Proof-of-concept vulnerability with a patch available
  • Supply Chain = 6 (AISSI weighting 15%): Affects third-party AI development tooling
  • Business Impact = 4 (AISSI weighting 25%): No confirmed exploitation or material harm reported
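
Assuming the AISSI total is a simple weighted average of the five category scores (the stated weights sum to 100%), the arithmetic reduces to a one-line weighted sum:

# AISSI total as a weighted average of category scores (assumed formula).
scores  = {"criticality": 7, "propagation": 5, "exploitability": 4,
           "supply_chain": 6, "business_impact": 4}
weights = {"criticality": 0.25, "propagation": 0.20, "exploitability": 0.15,
           "supply_chain": 0.15, "business_impact": 0.25}

total = sum(scores[k] * weights[k] for k in scores)
print(f"AISSI total: {total:.2f}/10")  # prints 5.25; shown above as 5.3/10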

Sources

National Vulnerability Database CVE-2025-64108
https://nvd.nist.gov/vuln/detail/CVE-2025-64108

Cursor Security Advisory
https://cursor.sh/security


Scoring Methodology

  • Criticality (25%): Importance and sensitivity of the affected assets and data.
  • Propagation (20%): How easily the issue can escalate or spread to other resources.
  • Exploitability (15%): Whether the threat is actively exploited or only demonstrated in a lab.
  • Supply Chain (15%): Whether the threat originated with, or was amplified by, third-party vendors.
  • Business Impact (25%): Operational, financial, and reputational consequences.
