AppSOC is now PointGuard AI

Poisoned MCP Package on GitHub Opens a Backdoor

Key Takeaways

  • Malicious code was intentionally embedded in an MCP package
  • Backdoors executed during installation and runtime
  • Attack enabled remote command execution on developer systems
  • Incident represents confirmed AI supply chain compromise

Malicious MCP Package Distributed Through Trusted Channels

A malicious MCP package was published to a public package repository and distributed through trusted channels, embedding backdoor functionality that executed both during installation and at runtime. The incident represents a confirmed AI supply chain compromise, allowing attackers to gain remote access to systems running AI tooling that depended on the package.

Source: GitHub Malware Advisory Database

What We Know

In October 2025, maintainers identified a malicious MCP package that contained intentionally embedded backdoor functionality. According to the GitHub Malware Advisory, the package executed attacker-controlled code during installation and again during runtime, enabling remote command execution on affected systems.

The malicious logic allowed the attacker to establish outbound connections and maintain persistence. Because MCP packages are commonly used to extend AI assistants and agent frameworks, the compromised package posed a direct risk to AI-enabled development environments.

GitHub classified the package as malicious rather than vulnerable, indicating intentional attacker behavior rather than accidental insecure coding. The package was removed, and warnings were issued to affected users.

Source: GitHub Advisory GHSA-xmqc-rm22-fxq6
Source: OSV Malicious Package Entry

How the Breach Happened

This incident was the result of a classic supply chain attack adapted to AI tooling ecosystems. An attacker published a malicious MCP package that appeared legitimate and was installable through standard dependency workflows.

Once installed, the package executed malicious scripts automatically. Because MCP packages are designed to integrate deeply with AI agents and development tools, the malicious code gained access to system resources without requiring additional user interaction.

The breach did not rely on AI model manipulation. Instead, it exploited trust in package ecosystems that increasingly support AI agents, assistants, and automation frameworks.

Why It Matters

AI development workflows rely heavily on third-party packages to enable rapid experimentation and agent extensibility. A malicious package in this ecosystem can compromise developer systems, leak credentials, or tamper with AI workflows at scale.

This incident demonstrates that AI supply chain attacks are no longer theoretical. Confirmed backdoors embedded in MCP tooling show how attackers can target the infrastructure that supports AI adoption rather than the models themselves.

Organizations deploying AI agents and developer tools must treat AI package ecosystems as high-risk supply chain components, especially where packages execute code automatically.
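One practical control follows from that guidance: flag dependencies that declare install-time hooks before they ever run. The advisory does not detail the exact mechanism this package used, but npm lifecycle scripts (preinstall, install, postinstall) are the most common vector for code that executes automatically on install, and MCP servers are typically distributed as npm packages. The sketch below, with illustrative function names, scans a dependency tree for such hooks:

```python
import json
from pathlib import Path

# Lifecycle hooks that npm runs automatically during "npm install".
AUTO_RUN_HOOKS = {"preinstall", "install", "postinstall", "prepare"}

def find_install_hooks(package_json_path):
    """Return any auto-executing lifecycle scripts a package declares.

    A package that declares one of these hooks runs arbitrary code the
    moment it is installed, so it deserves review before installation.
    """
    manifest = json.loads(Path(package_json_path).read_text())
    scripts = manifest.get("scripts", {})
    return {name: cmd for name, cmd in scripts.items() if name in AUTO_RUN_HOOKS}

def audit_tree(root):
    """Scan every package.json under root and report install-time hooks."""
    findings = {}
    for manifest in Path(root).rglob("package.json"):
        hooks = find_install_hooks(manifest)
        if hooks:
            findings[str(manifest)] = hooks
    return findings
```

As a complementary control, `npm install --ignore-scripts` (or `ignore-scripts=true` in `.npmrc`) prevents lifecycle scripts from running at all, at the cost of breaking packages that legitimately need them.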

PointGuard AI Perspective

This incident underscores the importance of securing AI supply chains, not just AI models or prompts.

PointGuard AI helps organizations identify and manage risks introduced by AI-related dependencies through continuous visibility into AI application components and integrations. By monitoring how AI tools, agents, and packages behave at runtime, PointGuard AI enables early detection of anomalous activity that may indicate malicious code execution.

Policy-based controls help limit the blast radius of compromised components by restricting what AI agents and tools are allowed to access or execute. This reduces the impact of malicious packages that attempt to abuse trusted workflows.

By tracking real-world AI supply chain incidents, PointGuard AI supports proactive security decisions that help teams adopt AI technologies without inheriting hidden risks.

Source: AI Supply Chain Security
Source: AI Runtime Defense
Source: AI Security Incident Tracker

Incident Scorecard Details

Total AISSI Score: 8.6/10

  • Criticality = 9.0 (weight 25%): confirmed backdoor and remote access capability
  • Propagation = 8.5 (weight 20%): distributed through trusted package ecosystems
  • Exploitability = 8.5 (weight 15%): automatic execution at install and runtime
  • Supply Chain = 9.0 (weight 15%): direct compromise of the AI tooling supply chain
  • Business Impact = 8.5 (weight 25%): confirmed malicious activity and system compromise risk
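Assuming the total is a plain weighted average of the five category scores (an assumption about the AISSI formula; the published methodology may apply its own rounding), the arithmetic can be reproduced as:

```python
# Category scores and AISSI weights as listed in the scorecard above.
scores = {
    "criticality": (9.0, 0.25),
    "propagation": (8.5, 0.20),
    "exploitability": (8.5, 0.15),
    "supply_chain": (9.0, 0.15),
    "business_impact": (8.5, 0.25),
}

# Weighted average: sum of score * weight (the weights sum to 1.0).
total = sum(score * weight for score, weight in scores.values())
print(round(total, 2))  # → 8.7
```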

Sources

  • GitHub Malware Advisory Database GHSA-xmqc-rm22-fxq6
  • Open Source Vulnerability Database (OSV) malicious package entry


Scoring Methodology

  • Criticality (25%): Importance and sensitivity of the affected assets and data.
  • Propagation (20%): How easily the issue can escalate or spread to other resources.
  • Exploitability (15%): Whether the threat is actively exploited or only demonstrated in a lab.
  • Supply Chain (15%): Whether the threat originated with, or was amplified by, third-party vendors.
  • Business Impact (25%): Operational, financial, and reputational consequences.
