Microsoft Copilot “Reprompt” Attack Enables Session Hijack and Data Exfiltration
Key Takeaways
- A crafted link exploiting Copilot’s prompt parameter enabled session takeovers
- Attack persisted beyond chat closure and could siphon sensitive data
- The vulnerability relied on prompt injection and back-and-forth request techniques
- Microsoft has fixed the issue; no evidence of wild exploitation
Microsoft Copilot “Reprompt” Attack: Silent Session Hijacking and Data Theft
On January 14, 2026, security news outlets reported on a newly identified AI security threat against Microsoft Copilot. Researchers at Varonis Threat Labs described a technique they call “Reprompt”, which exploited how Copilot processes URL-based prompts to hijack a user’s active session and enable covert data exfiltration without additional user interaction. The issue was responsibly disclosed to Microsoft and resolved as of Patch Tuesday in January 2026. (BleepingComputer)
What Happened: Incident Overview
The Reprompt attack centers on how Microsoft Copilot handles prompt inputs included in a URL parameter labeled 'q'. By embedding malicious instructions in this parameter, an attacker could cause Copilot to automatically execute these prompts as soon as the page loaded—even if the victim did not manually interact with the AI interface. After the initial malicious link was clicked, Copilot’s protections against data leakage could be bypassed by crafting multiple back-to-back requests that effectively neutered the model’s guardrails. (SecurityWeek)
Researchers demonstrated that, by exploiting a parameter-to-prompt (P2P) injection, a double-request technique, and a chain-request method, an attacker could maintain an ongoing exchange with Copilot. This iterative communication would allow silent extraction of data from the victim’s session context, with each response triggering the next exfiltration request in a persistent loop. (SecurityWeek)
Copilot’s traditional data-leak protections only applied to the first request. The Reprompt method manipulated this behavior by sending the same request twice, enabling the second response to include sensitive user information that should have been suppressed. Additionally, by chaining follow-up instructions from an attacker’s server, the technique could continually siphon information without alerting the user. (SecurityWeek)
How the Breach Could Happen
The core vulnerability exploited by Reprompt is not a conventional software bug but an inherent weakness in how Copilot interprets and executes prompts passed via URL parameters. Copilot by design auto-populates prompts and processes them without requiring explicit user steps beyond an initial click.
The attack required only that:
- A victim click a malicious link containing an embedded prompt
- Copilot process the prompt automatically on load
- The attacker supply further instructions through chained server requests
This multi-stage mechanism enabled Copilot to follow attacker instructions beyond the initial prompt execution, leveraging active session tokens and persistent session context—even if the Copilot tab was closed. Regular client-side security tools could not detect this behavior because the exfiltration occurred during dynamic back-and-forth requests rather than in the initial prompt payload. (SecurityWeek)
Impact: Why It Matters
Although there is no evidence of Reprompt being exploited in the wild, the technique highlights a subtle and significant class of AI security vulnerabilities. By leveraging prompt handling behaviors and persistent session contexts, attackers could execute actions and extract data far beyond what traditional prompt injection flaws allowed.
The incident is especially concerning because Reprompt:
- Operated with a single user click, requiring no further interaction
- Persisted beyond the initial Copilot session closure
- Bypassed existing data-leak protections on subsequent requests
- Was invisible to typical client-side monitoring tools (SecurityWeek)
This event demonstrates that AI assistants with persistent session contexts and deep platform integrations can be misused to access sensitive information and perform actions without explicit user intent. It also shows why AI threat modeling must consider multi-stage and session-oriented exploits rather than only simple prompt manipulation.
PointGuard AI Perspective
From the PointGuard AI perspective, the Reprompt incident underscores a broader truth about AI security: conversation context and session persistence expand the attack surface beyond traditional software models.
AI assistants that automatically interpret prompts, execute instructions without explicit human confirmation, and retain session state create unique vectors that adversaries can manipulate. Even when prompt injection flaws are known, naive defenses that only inspect the initial user input may fail to detect chained or dynamic command sequences utilized by attackers.
To mitigate similar attacks, organizations should adopt:
- Runtime behavior monitoring of session-oriented AI systems
- Guardrails that verify intent and execution patterns beyond the first prompt
- Threat modeling that includes multi-stage prompt exploitation
- Continuous validation of AI assistant interactions against policy baselines
This incident also highlights the importance of vendor-level patching and rapid response, but it should remind security teams that published fixes alone do not address all downstream risk vectors. Continuous observability, layered controls, and adversarial testing are critical to reducing exploitation opportunities in agentic AI platforms.
Incident Scorecard Details
Total AISSI Score: 8.3/10
Criticality = 8.0, Session hijacking and data exfiltration via prompt mechanism
Propagation = 7.5, Potential for broad exploitation via phishing vectors
Exploitability = 8.5, Only one user click required; no credentials needed
Supply Chain = 4.5, Client AI assistant risk, not third-party compromise
Business Impact = 8.0, Potential exposure of sensitive user and contextual data
Sources
- BleepingComputer: Reprompt attack hijacked Microsoft Copilot sessions for data theft — January 14, 2026 (BleepingComputer)
- SecurityWeek: New ‘Reprompt’ attack silently siphons Microsoft Copilot data — January 15, 2026 (SecurityWeek)
