AppSOC is now PointGuard AI

OMNI-LEAK: Agents Collude to Leak Data Across Boundaries

Key Takeaways

  • OMNI-LEAK is an academic lab demonstration, not confirmed in real-world exploitation
  • Researchers showed multi-agent prompt injection can bypass data separation assumptions
  • One compromised agent can orchestrate other agents to leak sensitive information
  • Highlights propagation risks in agent ecosystems, especially with shared tools and memory

OMNI-LEAK Demonstrates Multi-Agent Data Leakage in Labs

OMNI-LEAK is a research demonstration showing how prompt injection can be coordinated across multiple AI agents to extract sensitive data, even when access controls exist at the individual-agent level. The work, published as an academic preprint on arXiv – OMNI-LEAK, has not been reported as exploited in real-world enterprise environments. However, it provides a concrete example of how multi-agent orchestration can create emergent security failures, especially when agents share tools, memory, or delegated task execution.

What We Know

Researchers introduced OMNI-LEAK as a multi-agent prompt injection and data leakage technique designed to exploit trust assumptions in agent ecosystems. The demonstration focuses on scenarios where multiple AI agents collaborate, each with different tools, permissions, or roles. In these environments, a compromised or maliciously influenced agent can manipulate other agents into retrieving and disclosing information that should remain restricted.

The OMNI-LEAK work was published as an academic preprint in February 2026 on arXiv. The researchers describe a reproducible methodology for orchestrating leakage across agent boundaries by using indirect instruction chains, task delegation, and cross-agent message passing.

Importantly, OMNI-LEAK is not associated with a specific vendor breach or a confirmed real-world incident. It is a lab demonstration intended to show a class of vulnerability affecting agentic AI architectures broadly. The risk is most relevant for systems where agents operate with autonomy, share memory, or can invoke external tools such as web browsing, database queries, email access, or internal knowledge base retrieval.

What Could Happen

While OMNI-LEAK has not been observed in the field, the demonstrated technique highlights a credible near-term risk as agent deployments expand.

In a typical enterprise multi-agent environment, one agent may have access to customer records, another may have access to internal documentation, and a third may have tool permissions for messaging or ticketing systems. OMNI-LEAK shows how a compromised agent could exploit these divisions by crafting instructions that cause other agents to retrieve restricted data and return it through normal collaboration channels.

The most realistic exploitation path involves indirect prompt injection. For example, a malicious instruction embedded in a document, email, or support ticket could influence an agent that processes it. That agent could then manipulate other agents to fetch sensitive data, bypassing per-agent access assumptions.

This type of attack is amplified by AI properties such as autonomy, delegated task execution, and the ability to summarize or transform data into forms that evade simple keyword-based filtering. If agents can trigger tool calls or external actions, the technique could evolve into real-world data exfiltration through logs, URLs, or outbound messages.

Why It Matters

OMNI-LEAK matters because it challenges a core assumption in many agent architectures: that dividing tasks across multiple agents with different permissions inherently improves security. In practice, collaboration channels become a new attack surface. When agents trust each other’s messages and outputs, the system can behave like a single composite entity with a combined permission set.

This creates a propagation risk unique to agentic AI systems. A compromise in one agent does not stay contained. It can cascade through delegation and shared context, resulting in cross-boundary leakage even if no single agent is authorized to access all sensitive data.

The implications extend beyond confidentiality. Multi-agent leakage can also undermine integrity if malicious instructions cause agents to alter records, file tickets, or take operational actions based on manipulated context.

As enterprises deploy agents for internal operations, customer support, engineering workflows, and automated decision-making, this class of vulnerability becomes increasingly relevant. The attack is not hypothetical. It is a concrete, reproducible demonstration that is likely to inspire real-world attempts as agent platforms mature.

PointGuard AI Perspective

OMNI-LEAK highlights why agentic AI security requires more than traditional access control and per-agent permissioning. In multi-agent systems, the true risk surface emerges from orchestration, delegation, and trust relationships between agents.

PointGuard AI helps organizations secure agent ecosystems by providing continuous monitoring of AI workflow behavior, including cross-agent instruction chains and anomalous delegation patterns. This enables security teams to detect when an agent is being manipulated into retrieving or transmitting sensitive data outside expected task boundaries.

PointGuard AI also supports AI policy enforcement across agent tool usage, ensuring that data access constraints apply not only at the repository level, but throughout the full inference and orchestration lifecycle. For example, even if an agent has read access to a dataset, PointGuard AI can help detect and prevent downstream disclosure into untrusted channels such as chat logs, external connectors, or summary outputs.

Through AI SBOM visibility and dependency mapping, PointGuard AI reduces the blind spots introduced by third-party agent frameworks, orchestration layers, and tool integrations. This is critical because multi-agent risks often originate from emergent behavior across multiple components rather than a single isolated vulnerability.

As agent adoption accelerates, PointGuard AI enables proactive, trustworthy AI deployment by validating governance controls continuously and detecting multi-agent security failures before they become real-world breaches.

Incident Scorecard Details

Total AISSI Score: 6.4/10

Criticality = 7, Demonstrates credible risk of sensitive enterprise data leakage, AISSI weighting: 25%

Propagation = 8, Multi-agent systems enable cascading cross-boundary leakage, AISSI weighting: 20%

Exploitability = 4, Lab-demonstrated and reproducible but not confirmed in-the-wild, AISSI weighting: 15%

Supply Chain = 6, Impacts broad agent frameworks and orchestration layers, AISSI weighting: 15%

Business Impact = 5, High potential impact but no confirmed real-world harm yet, AISSI weighting: 25%

Sources

AI Security Severity Index (AISSI)

0/10

Threat Level

Criticality

7

Propagation

8

Exploitability

4

Supply Chain

6

Business Impact

5

Scoring Methodology

Category

Description

weight

Criticality

Importance and sensitivity of theaffected assets and data.

25%

PROPAGATION

How easily can the issue escalate or spread to other resources.

20%

EXPLOITABILITY

Is the threat actively being exploited or just lab demonstrated.

15%

SUPPLY CHAIN

Did the threat originate with orwas amplified by third-partyvendors.

15%

BUSINESS IMPACT

Operational, financial, andreputational consequences.

25%

Watch Incident Video

Subscribe for updates:

Subscribe

Ready to get started?

Our expert team can assess your needs, show you a live demo, and recommend a solution that will save you time and money.