Mobile MCP Opens the Door to Malicious Android Actions (CVE-2026-35394)

Key Takeaways

  • The mobile_open_url tool passed user input directly into Android intents
  • The issue can enable calls, SMS actions, content access, and other device behaviors
  • The exposure is especially relevant to prompt injection and agent tool misuse
  • The issue was fixed in version 0.0.50

Prompt-driven device actions become a real security issue

A vulnerability in mobile-mcp allows unvalidated URLs to flow into Android’s intent system, making it possible for AI-connected workflows to trigger arbitrary device actions. This is a strong example of how unsafe tool integration can turn prompt manipulation into operational impact. (GitHub)

What We Know

The GitHub Advisory for CVE-2026-35394 states that the mobile_open_url tool passed user-supplied URLs directly to Android’s intent system without scheme validation. Public descriptions say that arbitrary Android intents could be executed, including USSD codes, phone calls, SMS messages, and content provider access. The GitLab advisory records the issue as affecting versions before 0.0.50.
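The missing control here is ordinary scheme validation before the URL ever reaches the intent system. The sketch below shows what an allowlist check could look like in a TypeScript tool handler; `openUrlSafely` and `ALLOWED_SCHEMES` are illustrative names, not the actual mobile-mcp API.

```typescript
// Hypothetical sketch: validate a URL's scheme against an allowlist before
// handing it to the device. Anything outside http/https is rejected, which
// blocks intent:, tel:, sms:, content:, and similar dangerous schemes.
const ALLOWED_SCHEMES = new Set(["http:", "https:"]);

function openUrlSafely(rawUrl: string): string {
  // URL() throws on malformed input; treat that as a rejection too.
  let parsed: URL | null = null;
  try {
    parsed = new URL(rawUrl);
  } catch {
    parsed = null;
  }
  if (parsed === null) {
    throw new Error(`Rejected: not a valid URL: ${rawUrl}`);
  }
  if (!ALLOWED_SCHEMES.has(parsed.protocol)) {
    throw new Error(`Rejected: scheme not allowed: ${parsed.protocol}`);
  }
  return parsed.toString(); // now safe to forward to the platform
}
```

A deny-list of known-bad schemes is the weaker alternative: new or vendor-specific schemes slip through, while an allowlist fails closed.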

What makes this incident especially useful for the tracker is the AI angle. This is not just an Android bug. It is an MCP-mediated bridge between AI-controlled workflows and real device behavior. That makes it a practical example of how prompt injection or unsafe automation can escape the model boundary and influence external systems.

What Could Happen

Once AI outputs are allowed to drive tools without strict validation, the line between content generation and action execution disappears. In this case, a malicious prompt or manipulated input could steer an agent into invoking a vulnerable mobile tool with dangerous parameters.

That could lead to unauthorized communications, device resource access, or socially engineered follow-on actions such as app-install prompts. The key lesson is that autonomy magnifies ordinary input-validation bugs. When AI systems are connected to execution environments, small validation failures can become real-world attack paths rather than abstract software flaws.
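One defensive pattern this implies is a policy gate between model output and tool execution, so that a model's decision alone cannot trigger a sensitive device action. The sketch below is hypothetical: the tool names and the `dispatch` helper are invented for illustration, not part of any MCP implementation.

```typescript
// Hypothetical sketch: require explicit user approval before executing
// tools that can cause device-level side effects.
type ToolCall = { tool: string; args: Record<string, string> };

// Tools considered sensitive because they touch communications or intents.
const SENSITIVE_TOOLS = new Set(["mobile_open_url", "send_sms", "place_call"]);

function requiresApproval(call: ToolCall): boolean {
  return SENSITIVE_TOOLS.has(call.tool);
}

function dispatch(call: ToolCall, userApproved: boolean): string {
  if (requiresApproval(call) && !userApproved) {
    // Fail closed: the agent's request is logged and held, not executed.
    return `blocked: ${call.tool} requires explicit user approval`;
  }
  return `executed: ${call.tool}`;
}
```

The point is architectural rather than cryptographic: approval is enforced outside the model loop, so prompt injection can at most request an action, never complete one.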

Why It Matters

This incident matters because it shows how AI security failures can move beyond model misuse and into device operations. It highlights the need for stronger boundaries between agent reasoning, tool invocation, and user consent.

For organizations building agent-based mobile experiences, the technical failure is straightforward, but the governance lesson is broader. Unsafe action surfaces need explicit controls, not just better prompts or model tuning. The industry is moving quickly toward connected agents, and this is exactly the kind of issue that should shape safer design patterns early.

PointGuard AI Perspective

The fastest way to reduce this class of risk is to make connected AI tooling visible and governable. PointGuard AI’s AI Discovery helps organizations identify agents, MCP servers, notebooks, and related AI assets before hidden integrations create risk in production. (PointGuard AI)

For environments where model outputs can influence sensitive actions, PointGuard AI’s AI Detection & Response provides inline monitoring of prompts and responses to help catch misuse, data leakage, and prompt-injection-style abuse before it becomes a real incident. (PointGuard AI)

And with AI Governance, teams can apply guardrails and compliance-driven controls to high-risk AI workflows, helping ensure that automated actions are constrained and auditable. (PointGuard AI)

Incident Scorecard Details

Total AISSI Score: 6.2/10

  • Criticality = 7 (weight 25%): device-level actions and content access create meaningful exposure.
  • Propagation = 7 (weight 20%): risk can spread through MCP-connected agent workflows.
  • Exploitability = 5 (weight 15%): proof-of-concept-level abuse is publicly described.
  • Supply Chain = 7 (weight 15%): risk depends on external MCP tooling and integration design.
  • Business Impact = 5 (weight 25%): credible harm exists, but broad confirmed exploitation is not established.
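The total is simply the weighted sum of the five category scores. This sketch reproduces the calculation from the figures above:

```typescript
// Weighted AISSI total from the scorecard: score × weight, summed.
const scores: Array<[string, number, number]> = [
  ["Criticality", 7, 0.25],
  ["Propagation", 7, 0.20],
  ["Exploitability", 5, 0.15],
  ["Supply Chain", 7, 0.15],
  ["Business Impact", 5, 0.25],
];

const total = scores.reduce((sum, [, score, weight]) => sum + score * weight, 0);
// 7*0.25 + 7*0.20 + 5*0.15 + 7*0.15 + 5*0.25 ≈ 6.2
```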


Scoring Methodology

  • Criticality (weight 25%): Importance and sensitivity of the affected assets and data.
  • Propagation (weight 20%): How easily the issue can escalate or spread to other resources.
  • Exploitability (weight 15%): Whether the threat is actively being exploited or only demonstrated in a lab.
  • Supply Chain (weight 15%): Whether the threat originated with, or was amplified by, third-party vendors.
  • Business Impact (weight 25%): Operational, financial, and reputational consequences.
