ChatGPT Prompt Injection Enables Silent DNS Data Exfiltration

Key Takeaways

  • Prompt injection enables silent data exfiltration from ChatGPT
  • DNS used as covert channel to bypass detection
  • Indirect injection via documents and web content expands attack surface
  • No visible indicators for users or defenders

ChatGPT Flaw Enables Silent Data Exfiltration via DNS

Researchers discovered a vulnerability in ChatGPT that allows attackers to exfiltrate sensitive data through DNS requests triggered by prompt injection. As reported by TechRadar coverage of the disclosure, the issue demonstrates how indirect prompt injection can lead to stealthy data leakage without user awareness. (techradar.com)

What We Know

In early 2026, researchers from Check Point Research identified a vulnerability in ChatGPT related to how it processes external content such as emails, PDFs, and web pages. The issue allows attackers to embed hidden instructions that the model interprets as legitimate prompts.

When processed, these instructions can cause ChatGPT to extract sensitive data from ongoing conversations and encode it into DNS queries sent to attacker-controlled domains. This technique leverages DNS as a covert exfiltration channel, which is typically trusted and rarely inspected in depth.

The attack does not require explicit user interaction beyond opening or processing malicious content. According to reporting, OpenAI acknowledged the issue and implemented mitigations following disclosure.
Source: TechRadar report on ChatGPT data leakage vulnerability

Additional analysis highlights that this attack is a form of indirect prompt injection, where malicious instructions are delivered through third-party content rather than direct user input. See Check Point Research analysis of prompt injection risks for broader context on this attack class. (research.checkpoint.com)

What Happened

The vulnerability combines prompt injection with covert data exfiltration techniques.

Attackers embed malicious instructions in external content such as documents or web pages. When ChatGPT processes this content, it interprets the instructions as valid prompts. These instructions direct the model to retrieve sensitive information from the conversation context.

The model is then manipulated into encoding this data into DNS queries. Because DNS is a trusted protocol and often not tightly monitored, the exfiltration can occur without triggering traditional security controls.

The root issue is the lack of clear trust boundaries between user input, external content, and model instructions. ChatGPT cannot reliably distinguish between benign and malicious instructions embedded in third-party content.

Additionally, the system allows model outputs to influence outbound network interactions without sufficient validation. This creates a direct path from prompt injection to data exfiltration.

Why It Matters

This incident introduces a new level of stealth in AI-driven attacks. Unlike traditional data exfiltration methods, this approach leverages normal system behavior to avoid detection.

Organizations using AI tools to process sensitive data are particularly at risk. This includes internal communications, customer data, and proprietary information. Because the attack operates silently, it may remain undetected for extended periods.

The use of indirect prompt injection significantly expands the attack surface. Any external content source, including emails and documents, can serve as an attack vector.

From a compliance perspective, the potential exposure of sensitive data raises concerns under regulations such as GDPR and emerging AI governance frameworks. Even without confirmed exploitation, the risk of silent data leakage is significant.

This incident reinforces the need for stronger controls around how AI systems process external inputs and initiate outbound communications.

PointGuard AI Perspective

The ChatGPT data exfiltration incident highlights the need to control how AI systems interact with untrusted content and external networks. Prompt injection becomes far more dangerous when models can trigger outbound communication.

PointGuard AI helps mitigate this risk through continuous monitoring of AI inputs and outputs. Its runtime detection capabilities identify patterns consistent with prompt injection and data exfiltration attempts before sensitive data is exposed.
Learn more: https://www.pointguardai.com/faq/ai-runtime-detection-response

The platform also enforces strict trust boundaries for external content. By validating inputs from documents, emails, and web sources, PointGuard AI prevents untrusted data from influencing model behavior in unsafe ways.
Learn more: https://www.pointguardai.com/ai-security-governance

In addition, PointGuard AI provides visibility into AI system interactions and dependencies, helping organizations understand where sensitive data flows and how it could be exposed. This aligns with the need for AI supply chain awareness highlighted in modern security frameworks.
Learn more: https://www.pointguardai.com/blog/from-sbom-to-ai-bom-rethinking-supply-chain-security-in-the-ai-era

As AI systems become more integrated into enterprise workflows, proactive controls over input handling, model behavior, and outbound communication are essential to maintaining security and trust.

Incident Scorecard Details

Total AISSI Score: 7.3/10

  • Criticality = 8, Sensitive conversational and enterprise data at risk, AISSI weighting: 25%
  • Propagation = 7, Indirect injection via multiple content sources increases reach, AISSI weighting: 20%
  • Exploitability = 6, Demonstrated research with realistic attack scenario, AISSI weighting: 15%
  • Supply Chain = 7, Reliance on external content and hosted AI services increases exposure, AISSI weighting: 15%
  • Business Impact = 6, Patched vulnerability with no confirmed exploitation, AISSI weighting: 25%

Sources

AI Security Severity Index (AISSI)

0/10

Threat Level

Criticality

8

Propagation

7

Exploitability

6

Supply Chain

7

Business Impact

6

Scoring Methodology

Category

Description

weight

Criticality

Importance and sensitivity of theaffected assets and data.

25%

PROPAGATION

How easily can the issue escalate or spread to other resources.

20%

EXPLOITABILITY

Is the threat actively being exploited or just lab demonstrated.

15%

SUPPLY CHAIN

Did the threat originate with orwas amplified by third-partyvendors.

15%

BUSINESS IMPACT

Operational, financial, andreputational consequences.

25%

Watch Incident Video

Subscribe for updates:

Subscribe

Ready to get started?

Our expert team can assess your needs, show you a live demo, and recommend a solution that will save you time and money.