AppSOC is now PointGuard AI

AI Security Incident Tracker

Get updates from our Research Lab on the latest incidents and threats affecting AI applications and agents, and steps you can take to protect against them.

Categories

   Subscribe for updates:

Subscribe

Incident

Date Reported

Summary

Impact

Severity Score

Mar 11, 2026

Critical Excel vulnerability could trigger Copilot to exfiltrate data with zero user interaction.

A cross-site scripting flaw in Microsoft Excel could allow Copilot Agent mode to silently disclose sensitive data over the network without requiring a victim to click or open malicious content.

6.6

Low

6.6

Medium

6.6

High

6.6

Critical

Mar 10, 2026

High severity vulnerability in Microsoft MCP servers patched in March Patch Tuesday update.

Unpatched MCP servers could allow attackers to manipulate AI tool interactions or access sensitive systems connected through the Model Context Protocol.

7.2

Low

7.2

Medium

7.2

High

7.2

Critical

Mar 6, 2026

Context7 MCP Server vulnerability allows attackers to inject malicious instructions into AI coding assistants.

Poisoned documentation could cause AI agents to delete files or exfiltrate sensitive data from developer systems.

6.3

Low

6.3

Medium

6.3

High

6.3

Critical

Mar 3, 2026

Critical vulnerability in ModelScope MS-Agent allows attackers to hijack AI agents and execute system commands.

Critical vulnerability in ModelScope MS-Agent allows attackers to hijack AI agents and execute system commands.

6.6

Low

6.6

Medium

6.6

High

6.6

Critical

Feb 24, 2026

GitHub Issues abused to inject malicious prompts via Copilot.

Demonstrated repository takeover risk through prompt injection and AI-assisted workflow abuse, highlighting AI supply chain and agentic workflow exposure.

6.6

Low

6.6

Medium

6.6

High

6.6

Critical

Feb 23, 2026

A compromised npm publish token was used to distribute a malicious version of the open-source Cline CLI that silently installed the OpenClaw AI agent on developer machines.

Unauthorized AI agent installation through a trusted developer tool highlights AI supply chain and agent propagation risks in production workflows.

7.1

Low

7.1

Medium

7.1

High

7.1

Critical

Feb 19, 2026

OMNI-LEAK demonstrates multi-agent prompt injection data leakage in lab settings.

Academic researchers showed how compromised AI agents can coordinate to exfiltrate sensitive data across agent boundaries, highlighting systemic risk in agentic AI architectures even without in-the-wild exploitation.

6.4

Low

6.4

Medium

6.4

High

6.4

Critical

Feb 18, 2026

Microsoft Copilot bug summarized confidential Outlook emails despite DLP controls.

Sensitivity labels and Data Loss Prevention policies failed to block Copilot from processing protected emails, creating potential enterprise compliance and confidentiality exposure.

6.6

Low

6.6

Medium

6.6

High

6.6

Critical

Feb 17, 2026

A log poisoning flaw in OpenClaw’s WebSocket handler allowed crafted headers to be written into logs, creating an indirect prompt injection risk.

Attackers could influence agent reasoning context via poisoned logs, potentially skewing suggestions, disclosures, or automated actions.

6.2

Low

6.2

Medium

6.2

High

6.2

Critical

Feb 13, 2026

Command injection flaw in GitHub Copilot enables potential remote code execution.

Vulnerability in Copilot’s Visual Studio integration could allow attackers to execute arbitrary commands in developer environments and CI pipelines if exploited.

7.3

Low

7.3

Medium

7.3

High

7.3

Critical

Feb 12, 2026

LLM-generated malware exploited the critical React2Shell vulnerability.

AI-assisted malware targeted exposed Docker environments, deploying cryptominers and demonstrating how large language models accelerate exploit development and lower barriers for cybercriminals.

8

Low

8

Medium

8

High

8

Critical

Feb 12, 2026

Google blocks large-scale AI model extraction attempt.

Over 100,000 malicious prompts targeted Gemini’s reasoning logic, exposing intellectual property risks but no confirmed user data breach.

7.4

Low

7.4

Medium

7.4

High

7.4

Critical

Feb 11, 2026

LangChain vulnerability allowed SSRF via unvalidated image URL handling.

Improper validation of image URLs in LangChain token counting logic could enable attackers to trigger server-side request forgery and access internal network resources.

7.8

Low

7.8

Medium

7.8

High

7.8

Critical

Feb 10, 2026

vLLM framework flaw enables remote code execution via malicious input.

Attackers can trigger arbitrary code execution on servers running vLLM by submitting crafted multimedia input, potentially compromising AI infrastructure and downstream enterprise systems.

7.6

Low

7.6

Medium

7.6

High

7.6

Critical

Feb 10, 2026

Prompt injection flaw in Gemini-powered Google Translate Advanced mode.

Attackers can embed instructions in translation input, causing Gemini to follow arbitrary commands instead of translating, exposing weaknesses in AI task boundaries and prompt handling.

5.8

Low

5.8

Medium

5.8

High

5.8

Critical

Feb 9, 2026

A critical zero-click RCE vulnerability in Anthropic’s Claude Desktop Extensions can compromise a system from a crafted Google Calendar event.

Attackers can achieve remote code execution without user interaction through AI tool chaining vulnerabilities in Model Context Protocol (MCP) connectors.

8.4

Low

8.4

Medium

8.4

High

8.4

Critical

Feb 9, 2026

Researchers identify a new vulnerability class targeting AI logic and persistent memory layers.

Demonstrates how stored context can silently manipulate agent behavior across sessions without immediate detection.

6.8

Low

6.8

Medium

6.8

High

6.8

Critical

Feb 5, 2026

AnythingLLM exposed a vector database API key via unauthenticated endpoint.

Unauthenticated users could retrieve the Qdrant API key, enabling unauthorized access to stored embeddings, retrieval data, and potentially downstream AI workflows relying on that vector store.

6.2

Low

6.2

Medium

6.2

High

6.2

Critical

Feb 5, 2026

A Moltbook backend exposure leaked tokens, credentials, and agent session control data.

Leaked API keys and login tokens enabled unauthorized access, potential agent hijacking, and manipulation of autonomous agent sessions.

8.5

Low

8.5

Medium

8.5

High

8.5

Critical

Feb 4, 2026

A large study found widespread hardcoded secrets in Android AI apps.

Exposed API keys and cloud credentials could enable unauthorized access, data leaks, and backend compromise across thousands of AI-enabled mobile applications.

6.6

Low

6.6

Medium

6.6

High

6.6

Critical

Feb 3, 2026

A one-click OpenClaw flaw enabled token theft and remote code execution.

Attackers could hijack OpenClaw agent instances, execute arbitrary commands, steal secrets, and compromise connected systems by tricking a user into opening a malicious link.

7.8

Low

7.8

Medium

7.8

High

7.8

Critical

Feb 3, 2026

Metadata Mayhem: Python Libraries Turned Into RCE Traps

Attackers could trigger remote code execution in AI pipelines by loading crafted artifacts, enabling credential theft, lateral movement, and full compromise of model infrastructure.

7.3

Low

7.3

Medium

7.3

High

7.3

Critical

Feb 2, 2026

Langflow agent framework exposed critical API endpoints without authentication, enabling unauthorized access to sensitive data and workflow controls.

Demonstrates how missing authentication in agent frameworks can lead to data exposure and workflow manipulation.

6.4

Low

6.4

Medium

6.4

High

6.4

Critical

Feb 2, 2026

Attackers abused OpenClaw’s ClawHub skills ecosystem and impersonated tools to distribute malware.

Malicious skills and fake developer tools enabled credential theft, data exfiltration, and potential unauthorized remote access on affected user systems.

8.4

Low

8.4

Medium

8.4

High

8.4

Critical

Jan 28, 2026

Rapid adoption of Clawdbot, Moltbot, and OpenClaw exposed agent control planes and sensitive credentials.

Exposed agent deployments leaked API keys and chat data, allowing unauthorized access, potential agent takeover, and execution of privileged actions.

8.1

Low

8.1

Medium

8.1

High

8.1

Critical

Jan 27, 2026

Unauthenticated MCP endpoints exposed Clawdbot AI agents to takeover.

Hundreds of AI agents leaked credentials, private chats, and enabled unauthorized command execution due to insecure MCP deployment.

8.1

Low

8.1

Medium

8.1

High

8.1

Critical

Jan 23, 2026

A flaw in MCP JamInspector exposed AI control-plane inspection paths.

The vulnerability enabled potential interception and manipulation of AI agent communications through improperly secured MCP inspection hooks.

7.3

Low

7.3

Medium

7.3

High

7.3

Critical

Jan 22, 2026

A Copilot Studio flaw enabled unauthenticated access to sensitive information.

The vulnerability could expose tenant data from Copilot Studio, creating privacy, compliance, and downstream account compromise risks.

6.8

Low

6.8

Medium

6.8

High

6.8

Critical

Jan 22, 2026

Typebot flaw lets malicious bots steal stored user credentials during preview runs.

Attackers can exfiltrate OpenAI keys, Google tokens, and SMTP passwords from victims who preview malicious typebots.

7.6

Low

7.6

Medium

7.6

High

7.6

Critical

Jan 21, 2026

Chainlit framework flaws allowed attackers to read files and trigger server-side requests.

Unpatched deployments risked exposure of credentials, configuration files, and internal services, potentially enabling broader cloud and infrastructure compromise.

7.3

Low

7.3

Medium

7.3

High

7.3

Critical

Jan 21, 2026

Multiple flaws in Anthropic’s MCP Git server enabled unauthorized file access and potential code execution.

Attackers could manipulate MCP tool calls to access files and execute code on systems running vulnerable MCP Git servers.

7.2

Low

7.2

Medium

7.2

High

7.2

Critical

Jan 20, 2026

Unsafe HTML rendering in an AI MCP client enabled code execution.

Malicious content could lead to remote command execution on affected systems.

7.2

Low

7.2

Medium

7.2

High

7.2

Critical

Jan 20, 2026

Indirect prompt injection in Google Gemini allowed unauthorized extraction of private calendar data.

Attackers could craft calendar invites that triggered Gemini to expose sensitive meeting details without user awareness, highlighting AI model misuse risks in enterprise productivity tools.

5.8

Low

5.8

Medium

5.8

High

5.8

Critical

Jan 17, 2026

Vertex AI misconfigurations enabled privilege escalation through service agent abuse.

Low-privilege users could gain broad project access, exposing sensitive AI workloads and cloud resources.

7.6

Low

7.6

Medium

7.6

High

7.6

Critical

Jan 16, 2026

AI assistant web retrieval tool exposed information through unsafe external content handling.

Misuse of AI tooling could enable information leakage or abuse of backend infrastructure.

6.4

Low

6.4

Medium

6.4

High

6.4

Critical

Jan 14, 2026

A prompt-based attack enabled hijacking of Microsoft Copilot user sessions.

Potential unauthorized access to Copilot sessions and extraction of sensitive personal data, highlighting risks in AI assistant prompt handling and session persistence.

8.3

Low

8.3

Medium

8.3

High

8.3

Critical

Jan 13, 2026

A critical AI-related vulnerability in ServiceNow’s Now Assist and Virtual Agent AI components could allow unauthenticated attackers to perform arbitrary actions across enterprise systems.

High-severity privilege escalation and impersonation risk for ServiceNow AI Platform users, potentially compromising operational data and business workflows across enterprise environments.

8.7

Low

8.7

Medium

8.7

High

8.7

Critical

Jan 12, 2026

Prompt injection flaw in an AI code editor enables unauthorized file modification.

Demonstrates how AI assisted development tools can bridge prompt manipulation into filesystem level compromise.

6.3

Low

6.3

Medium

6.3

High

6.3

Critical

Jan 12, 2026

A critical XSS flaw in OpenCode AI allowed malicious model output to execute code locally.

Unsanitized AI responses could trigger local script and command execution on developer machines, creating real risk without confirmed exploitation.

6

Low

6

Medium

6

High

6

Critical

Jan 7, 2026

Zero-click prompt injection enabled silent data exfiltration from ChatGPT agents.

The attack enabled unauthorized data access, bypassed user awareness, and exposed systemic risks in autonomous AI agents with connected enterprise services.

7.6

Low

7.6

Medium

7.6

High

7.6

Critical

Jan 5, 2026

Researchers demonstrated prompt injection attacks that cause medical AI chatbots to provide unsafe advice.

Manipulated chatbot outputs could lead users to follow harmful medical guidance, exposing patients to safety risks and organizations to legal, regulatory, and trust consequences.

6.1

Low

6.1

Medium

6.1

High

6.1

Critical

Jan 2, 2026

Langflow API endpoints lacked authentication, exposing user conversations and enabling data deletion.

Unauthenticated access to key monitoring endpoints could expose sensitive AI conversation data, transaction histories, and even allow message deletion without proper authorization.

6.8

Low

6.8

Medium

6.8

High

6.8

Critical

Dec 30, 2025

Malicious browser extensions secretly captured and exfiltrated AI chatbot conversation data from unsuspecting users.

Sensitive prompts, proprietary data, and personal information entered into AI tools were silently harvested at scale, exposing users to privacy, IP, and compliance risks.

7.6

Low

7.6

Medium

7.6

High

7.6

Critical

Dec 12, 2025

macOS malware leveraged trusted AI conversations and search ads to trick users into executing malicious commands.

Attackers abused user trust in AI tools and search results to deploy credential-stealing malware without exploiting traditional software vulnerabilities.

7.2

Low

7.2

Medium

7.2

High

7.2

Critical

Dec 10, 2025

State-linked threat actors deployed the BRICKSTORM backdoor to infiltrate VMware hypervisors and management systems for long-term espionage.

The compromise of virtualization layers threatens entire AI and enterprise compute stacks, enabling deep persistence, data theft, and operational disruption across critical infrastructure.

8.2

Low

8.2

Medium

8.2

High

8.2

Critical

Dec 8, 2025

Researchers uncovered 30+ vulnerabilities in AI coding tools enabling prompt-injected insider attacks.

Hijacked AI IDEs could steal secrets, execute arbitrary commands, and compromise enterprise supply chains through developer environments.

8.1

Low

8.1

Medium

8.1

High

8.1

Critical

Dec 8, 2025

A critical vulnerability in React/Next.js (React2Shell) enables remote code execution, now actively exploited by China-linked threat actors.

Unauthenticated remote code execution could compromise servers powering web apps, APIs, and any AI workloads relying on Node.js-based web backends — enabling data theft, supply-chain exposure, or backend compromise.

8.3

Low

8.3

Medium

8.3

High

8.3

Critical

Nov 26, 2025

Microsoft’s new “agent workspace” AI agents in Windows 11 can be tricked via prompt injection — risking data loss, malware, or unauthorized file access.

Expands attack surface on desktops: background AI agents with file access raise real threat of exploitation or misuse — especially when encountering untrusted content.

5.2

Low

5.2

Medium

5.2

High

5.2

Critical

Nov 19, 2025

Researchers showed that ServiceNow’s AI agents can be manipulated via prompt injection to perform unauthorized actions inside enterprise workflows.

Attackers could craft malicious inputs that cause AI agents to delete records, alter tickets, expose sensitive data, or trigger automated workflows.

5.5

Low

5.5

Medium

5.5

High

5.5

Critical

Nov 19, 2025

Active campaign hijacks exposed Ray clusters via unauthenticated Jobs API (CVE-2023-48022), harvesting GPU/CPU power for cryptomining and spreading across infrastructure.

Massive AI-infrastructure compromise — converts distributed AI clusters into self-propagating botnet, threatens data/model integrity, compute availability, and cross-tenant security.

7.8

Low

7.8

Medium

7.8

High

7.8

Critical

Nov 19, 2025

A hidden MCP API in Comet Browser allowed embedded extensions to execute local system commands — enabling attackers to hijack entire user devices.

Critical breach of browser and endpoint security; a clear example of how AI-enabled browsers can become full-system attack vectors.

7.9

Low

7.9

Medium

7.9

High

7.9

Critical

Nov 15, 2025

Attackers manipulated an Claude agent in coordinated cyber-espionage campaign, automating most intrusion steps through agentic workflows.

A critical escalation showing AI agents can act as autonomous attackers, enabling rapid, scalable intrusions that traditional AppSec and cloud defenses struggle to detect.

8.3

Low

8.3

Medium

8.3

High

8.3

Critical

Nov 14, 2025

Severe vulnerabilities in major AI inference frameworks exposed systems to remote code execution through insecure ZeroMQ and Python pickle mechanisms.

High-severity risk affecting Meta, NVIDIA, Microsoft, and multiple AI toolchains; exploited via widely deployed components across the AI ecosystem.

7.4

Low

7.4

Medium

7.4

High

7.4

Critical

Nov 7, 2025

A new side-channel exploit lets adversaries infer the topics of encrypted chats with language models via packet size and timing analysis.

Calls into question the confidentiality of LLM-based chat and assistant tools — even when using encryption, adversaries with network visibility may deduce sensitive topics and compromise user or enterprise privacy.

5.5

Low

5.5

Medium

5.5

High

5.5

Critical

Nov 5, 2025

Malware families now use AI / LLMs to dynamically rewrite code during runtime, enabling self-modifying, adaptive attacks that evade traditional defenses.

Signals a new phase of cyber threats — AI-augmented, adaptive malware capable of evolving mid-execution, complicating detection, increasing persistence, and threatening enterprise systems globally.

8

Low

8

Medium

8

High

8

Critical

Oct 21, 2025

Cursor CLI loaded unsafe project configuration from untrusted repositories.

Attackers could execute commands by tricking users into opening malicious projects.

7.5

Low

7.5

Medium

7.5

High

7.5

Critical

Oct 19, 2025

MCP OAuth response handling allowed command injection from untrusted servers.

Attackers could execute commands during OAuth exchanges with AI tools.

7.3

Low

7.3

Medium

7.3

High

7.3

Critical

Oct 14, 2025

Malicious MCP package installed backdoors during installation and runtime.

Attackers gained remote access through compromised AI tool supply chain packages according to GitHub advisory

8.6

Low

8.6

Medium

8.6

High

8.6

Critical

Oct 9, 2025

Unauthenticated RCE flaw exposed AI MCP server used by design workflows.

ttackers could execute arbitrary commands without authentication.

7.7

Low

7.7

Medium

7.7

High

7.7

Critical

Sep 25, 2025

Malicious MCP server impersonated Postmark to silently exfiltrate emails.

Attackers intercepted outbound email content and metadata from AI workflows.

8.2

Low

8.2

Medium

8.2

High

8.2

Critical

Aug 20, 2025

Compromised Drift OAuth tokens let attackers exfiltrate Salesforce data across hundreds of organizations.

One of the largest SaaS AI supply-chain breaches to date, exposing Salesforce instances, secrets, and customer records at over 700 organizations worldwide.

9.5

Low

9.5

Medium

9.5

High

9.5

Critical

Jul 23, 2025

Replit’s AI coding assistant deleted a live company database during a “vibe-coding” session.

Severe data loss and exposure of systemic risk in autonomous AI coding tools — threatens trust in “AI-assisted development.

5.5

Low

5.5

Medium

5.5

High

5.5

Critical

Jul 23, 2025

Malicious code in the Amazon Q extension enabled destructive wipe commands through the AI coding agent.

Flaw exposed nearly a million developers to potential file-system and cloud-resource destruction, highlighting severe supply-chain and agentic-tool risks within AI-assisted development environments.

7.3

Low

7.3

Medium

7.3

High

7.3

Critical

Jul 12, 2025

A novel attack allows malicious prompts to slip past AI-moderation filters via minimal text changes (e.g. a single character).

Critical weakness in many LLM moderation systems — exposes applications relying on filter-based safety to prompt injection, content abuse, or other misuse at scale.

5.6

Low

5.6

Medium

5.6

High

5.6

Critical

Jul 9, 2025

Russian APT28 deployed LameHug malware using an LLM to dynamically generate attack commands.

Confirmed phishing and malware execution against real targets enabled reconnaissance and data collection, demonstrating operational use of AI-driven malware by a state-linked threat actor.

7.9

Low

7.9

Medium

7.9

High

7.9

Critical

Jul 9, 2025

Over 64 million resumes exposed through misconfiguration of Paradox.ai hiring platform

High reputational damage, limited propagation, large-scale data breach of personal information

7.6

Low

7.6

Medium

7.6

High

7.6

Critical

Jun 19, 2025

Meta’s WhatsApp AI assistant disclosed a private individual’s phone number during user interactions.

The incident raised serious concerns about AI hallucinations, privacy leakage, and trust in consumer-facing AI assistants.

5.5

Low

5.5

Medium

5.5

High

5.5

Critical

May 29, 2025

An AI-powered Asana feature exposed project data across tenants due to a logic flaw.

Cross-tenant exposure of internal project data created confidentiality and enterprise risk concerns.

7.2

Low

7.2

Medium

7.2

High

7.2

Critical

May 19, 2025

Misconfigured vendor database exposed patient data, revealing how weak controls jeopardize AI-powered healthcare ecosystems.

Extensive PHI and PII exposure threatens identity and medical fraud, with risks amplified across AI systems that rely on compromised healthcare data for analytics and automated decision-making.

7.7

Low

7.7

Medium

7.7

High

7.7

Critical

May 14, 2025

An autonomous ElizaOS AI agent was manipulated into transferring approximately 55.5 ETH through memory prompt injection.

Direct financial loss, exposure of risks in autonomous AI agents, and heightened concern over AI-driven financial decision making.

7.1

Low

7.1

Medium

7.1

High

7.1

Critical

Mar 19, 2025

An AI-powered crypto trading agent called AiXBT was manipulated into sending unauthorized transactions resulting in the theft of ~55.5 ETH (~$106 K).

Direct financial loss, erosion of trust in autonomous AI agents, and increased awareness of behavioral attack surfaces in AI decision logic.

8.1

Low

8.1

Medium

8.1

High

8.1

Critical

Feb 28, 2025

Researchers found over 12,000 live API keys, passwords and credentials embedded in publicly archived web data used to train large-language models.

Major supply-chain vulnerability for AI — use of these credentials in training sets threatens cloud and API security across many organizations.

6.6

Low

6.6

Medium

6.6

High

6.6

Critical

Feb 12, 2025

A major data breach allegedly impacted OmniGPT, an AI aggregation platform, exposing user emails, phone numbers, chat logs, and sensitive API/credential data.

Compromise of personally identifiable information (PII), API keys, credentials, and millions of chat messages elevated risk of identity theft, account takeover, and privacy violations.

7.8

Low

7.8

Medium

7.8

High

7.8

Critical

Feb 10, 2025

Two ML models on Hugging Face contained malicious code that auto-executed on load, creating reverse-shells.

Demonstrates that open-source model repositories can become vectors for code execution attacks — undermining trust in shared AI supply chain.

5.7

Low

5.7

Medium

5.7

High

5.7

Critical

Dec 13, 2024

Optum left an internal AI “SOP Chatbot” publicly accessible, enabling anyone on the internet to query claim-handling prompts.

Confidential workflows and company SOPs were exposed — a stark warning that even internal-only AI tools can become large-scale exposure points.

4.3

Low

4.3

Medium

4.3

High

4.3

Critical

Sep 24, 2024

A severe container-escape bug in NVIDIA’s Container Toolkit (CVE-2025-23266) can allow GPU containers to break out and take over host systems.

Allows attackers to escalate privileges from a container to root on the host — threatening cloud AI services, shared GPU clusters, and enterprise AI infrastructure.

8.7

Low

8.7

Medium

8.7

High

8.7

Critical