AppSOC is now PointGuard AI

AI Security Incident Tracker

Get updates from our Research Lab on the latest incidents and threats affecting AI applications and agents, and steps you can take to protect against them.

Categories

Incident

Date Reported

Summary

Impact

Severity Score

Jan 23, 2026

A flaw in MCP JamInspector exposed AI control-plane inspection paths.

The vulnerability enabled potential interception and manipulation of AI agent communications through improperly secured MCP inspection hooks.

7.3

Low

7.3

Medium

7.3

High

7.3

Critical

Jan 21, 2026

Chainlit framework flaws allowed attackers to read files and trigger server-side requests.

Unpatched deployments risked exposure of credentials, configuration files, and internal services, potentially enabling broader cloud and infrastructure compromise.

7.3

Low

7.3

Medium

7.3

High

7.3

Critical

Jan 21, 2026

Multiple flaws in Anthropic’s MCP Git server enabled unauthorized file access and potential code execution.

Attackers could manipulate MCP tool calls to access files and execute code on systems running vulnerable MCP Git servers.

7.2

Low

7.2

Medium

7.2

High

7.2

Critical

Jan 20, 2026

Indirect prompt injection in Google Gemini allowed unauthorized extraction of private calendar data.

Attackers could craft calendar invites that triggered Gemini to expose sensitive meeting details without user awareness, highlighting AI model misuse risks in enterprise productivity tools.

5.8

Low

5.8

Medium

5.8

High

5.8

Critical

Jan 17, 2026

Vertex AI misconfigurations enabled privilege escalation through service agent abuse.

Low-privilege users could gain broad project access, exposing sensitive AI workloads and cloud resources.

7.6

Low

7.6

Medium

7.6

High

7.6

Critical

Jan 14, 2026

A prompt-based attack enabled hijacking of Microsoft Copilot user sessions.

Potential unauthorized access to Copilot sessions and extraction of sensitive personal data, highlighting risks in AI assistant prompt handling and session persistence.

8.3

Low

8.3

Medium

8.3

High

8.3

Critical

Jan 13, 2026

A critical AI-related vulnerability in ServiceNow’s Now Assist and Virtual Agent AI components could allow unauthenticated attackers to perform arbitrary actions across enterprise systems.

High-severity privilege escalation and impersonation risk for ServiceNow AI Platform users, potentially compromising operational data and business workflows across enterprise environments.

8.7

Low

8.7

Medium

8.7

High

8.7

Critical

Jan 12, 2026

A critical XSS flaw in OpenCode AI allowed malicious model output to execute code locally.

Unsanitized AI responses could trigger local script and command execution on developer machines, creating real risk without confirmed exploitation.

6

Low

6

Medium

6

High

6

Critical

Jan 7, 2026

Zero-click prompt injection enabled silent data exfiltration from ChatGPT agents.

The attack enabled unauthorized data access, bypassed user awareness, and exposed systemic risks in autonomous AI agents with connected enterprise services.

7.6

Low

7.6

Medium

7.6

High

7.6

Critical

Jan 5, 2026

Researchers demonstrated prompt injection attacks that cause medical AI chatbots to provide unsafe advice.

Manipulated chatbot outputs could lead users to follow harmful medical guidance, exposing patients to safety risks and organizations to legal, regulatory, and trust consequences.

6.1

Low

6.1

Medium

6.1

High

6.1

Critical

Jan 2, 2026

Langflow API endpoints lacked authentication, exposing user conversations and enabling data deletion.

Unauthenticated access to key monitoring endpoints could expose sensitive AI conversation data, transaction histories, and even allow message deletion without proper authorization.

6.8

Low

6.8

Medium

6.8

High

6.8

Critical

Dec 30, 2025

Malicious browser extensions secretly captured and exfiltrated AI chatbot conversation data from unsuspecting users.

Sensitive prompts, proprietary data, and personal information entered into AI tools were silently harvested at scale, exposing users to privacy, IP, and compliance risks.

7.6

Low

7.6

Medium

7.6

High

7.6

Critical

Dec 12, 2025

macOS malware leveraged trusted AI conversations and search ads to trick users into executing malicious commands.

Attackers abused user trust in AI tools and search results to deploy credential-stealing malware without exploiting traditional software vulnerabilities.

7.2

Low

7.2

Medium

7.2

High

7.2

Critical

Dec 10, 2025

State-linked threat actors deployed the BRICKSTORM backdoor to infiltrate VMware hypervisors and management systems for long-term espionage.

The compromise of virtualization layers threatens entire AI and enterprise compute stacks, enabling deep persistence, data theft, and operational disruption across critical infrastructure.

8.2

Low

8.2

Medium

8.2

High

8.2

Critical

Dec 8, 2025

Researchers uncovered 30+ vulnerabilities in AI coding tools enabling prompt-injected insider attacks.

Hijacked AI IDEs could steal secrets, execute arbitrary commands, and compromise enterprise supply chains through developer environments.

8.1

Low

8.1

Medium

8.1

High

8.1

Critical

Dec 8, 2025

A critical vulnerability in React/Next.js (React2Shell) enables remote code execution, now actively exploited by China-linked threat actors.

Unauthenticated remote code execution could compromise servers powering web apps, APIs, and any AI workloads relying on Node.js-based web backends — enabling data theft, supply-chain exposure, or backend compromise.

8.3

Low

8.3

Medium

8.3

High

8.3

Critical

Nov 26, 2025

Microsoft’s new “agent workspace” AI agents in Windows 11 can be tricked via prompt injection — risking data loss, malware, or unauthorized file access.

Expands attack surface on desktops: background AI agents with file access raise real threat of exploitation or misuse — especially when encountering untrusted content.

5.2

Low

5.2

Medium

5.2

High

5.2

Critical

Nov 19, 2025

Researchers showed that ServiceNow’s AI agents can be manipulated via prompt injection to perform unauthorized actions inside enterprise workflows.

Attackers could craft malicious inputs that cause AI agents to delete records, alter tickets, expose sensitive data, or trigger automated workflows.

5.5

Low

5.5

Medium

5.5

High

5.5

Critical

Nov 19, 2025

Active campaign hijacks exposed Ray clusters via unauthenticated Jobs API (CVE-2023-48022), harvesting GPU/CPU power for cryptomining and spreading across infrastructure.

Massive AI-infrastructure compromise — converts distributed AI clusters into self-propagating botnet, threatens data/model integrity, compute availability, and cross-tenant security.

7.8

Low

7.8

Medium

7.8

High

7.8

Critical

Nov 19, 2025

A hidden MCP API in Comet Browser allowed embedded extensions to execute local system commands — enabling attackers to hijack entire user devices.

Critical breach of browser and endpoint security; a clear example of how AI-enabled browsers can become full-system attack vectors.

7.9

Low

7.9

Medium

7.9

High

7.9

Critical

Nov 15, 2025

Attackers manipulated an Claude agent in coordinated cyber-espionage campaign, automating most intrusion steps through agentic workflows.

A critical escalation showing AI agents can act as autonomous attackers, enabling rapid, scalable intrusions that traditional AppSec and cloud defenses struggle to detect.

8.3

Low

8.3

Medium

8.3

High

8.3

Critical

Nov 14, 2025

Severe vulnerabilities in major AI inference frameworks exposed systems to remote code execution through insecure ZeroMQ and Python pickle mechanisms.

High-severity risk affecting Meta, NVIDIA, Microsoft, and multiple AI toolchains; exploited via widely deployed components across the AI ecosystem.

7.4

Low

7.4

Medium

7.4

High

7.4

Critical

Nov 7, 2025

A new side-channel exploit lets adversaries infer the topics of encrypted chats with language models via packet size and timing analysis.

Calls into question the confidentiality of LLM-based chat and assistant tools — even when using encryption, adversaries with network visibility may deduce sensitive topics and compromise user or enterprise privacy.

5.5

Low

5.5

Medium

5.5

High

5.5

Critical

Nov 5, 2025

Malware families now use AI / LLMs to dynamically rewrite code during runtime, enabling self-modifying, adaptive attacks that evade traditional defenses.

Signals a new phase of cyber threats — AI-augmented, adaptive malware capable of evolving mid-execution, complicating detection, increasing persistence, and threatening enterprise systems globally.

8

Low

8

Medium

8

High

8

Critical

Aug 20, 2025

Compromised Drift OAuth tokens let attackers exfiltrate Salesforce data across hundreds of organizations.

One of the largest SaaS AI supply-chain breaches to date, exposing Salesforce instances, secrets, and customer records at over 700 organizations worldwide.

9.5

Low

9.5

Medium

9.5

High

9.5

Critical

Jul 23, 2025

Replit’s AI coding assistant deleted a live company database during a “vibe-coding” session.

Severe data loss and exposure of systemic risk in autonomous AI coding tools — threatens trust in “AI-assisted development.

5.5

Low

5.5

Medium

5.5

High

5.5

Critical

Jul 23, 2025

Malicious code in the Amazon Q extension enabled destructive wipe commands through the AI coding agent.

Flaw exposed nearly a million developers to potential file-system and cloud-resource destruction, highlighting severe supply-chain and agentic-tool risks within AI-assisted development environments.

7.3

Low

7.3

Medium

7.3

High

7.3

Critical

Jul 12, 2025

A novel attack allows malicious prompts to slip past AI-moderation filters via minimal text changes (e.g. a single character).

Critical weakness in many LLM moderation systems — exposes applications relying on filter-based safety to prompt injection, content abuse, or other misuse at scale.

5.6

Low

5.6

Medium

5.6

High

5.6

Critical

Jul 9, 2025

Russian APT28 deployed LameHug malware using an LLM to dynamically generate attack commands.

Confirmed phishing and malware execution against real targets enabled reconnaissance and data collection, demonstrating operational use of AI-driven malware by a state-linked threat actor.

7.9

Low

7.9

Medium

7.9

High

7.9

Critical

Jul 9, 2025

Over 64 million resumes exposed through misconfiguration of Paradox.ai hiring platform

High reputational damage, limited propagation, large-scale data breach of personal information

7.6

Low

7.6

Medium

7.6

High

7.6

Critical

Jun 19, 2025

Meta’s WhatsApp AI assistant disclosed a private individual’s phone number during user interactions.

The incident raised serious concerns about AI hallucinations, privacy leakage, and trust in consumer-facing AI assistants.

5.5

Low

5.5

Medium

5.5

High

5.5

Critical

May 29, 2025

An AI-powered Asana feature exposed project data across tenants due to a logic flaw.

Cross-tenant exposure of internal project data created confidentiality and enterprise risk concerns.

7.2

Low

7.2

Medium

7.2

High

7.2

Critical

May 19, 2025

Misconfigured vendor database exposed patient data, revealing how weak controls jeopardize AI-powered healthcare ecosystems.

Extensive PHI and PII exposure threatens identity and medical fraud, with risks amplified across AI systems that rely on compromised healthcare data for analytics and automated decision-making.

7.7

Low

7.7

Medium

7.7

High

7.7

Critical

May 14, 2025

An autonomous ElizaOS AI agent was manipulated into transferring approximately 55.5 ETH through memory prompt injection.

Direct financial loss, exposure of risks in autonomous AI agents, and heightened concern over AI-driven financial decision making.

7.1

Low

7.1

Medium

7.1

High

7.1

Critical

Mar 19, 2025

An AI-powered crypto trading agent called AiXBT was manipulated into sending unauthorized transactions resulting in the theft of ~55.5 ETH (~$106 K).

Direct financial loss, erosion of trust in autonomous AI agents, and increased awareness of behavioral attack surfaces in AI decision logic.

8.1

Low

8.1

Medium

8.1

High

8.1

Critical

Feb 28, 2025

Researchers found over 12,000 live API keys, passwords and credentials embedded in publicly archived web data used to train large-language models.

Major supply-chain vulnerability for AI — use of these credentials in training sets threatens cloud and API security across many organizations.

6.6

Low

6.6

Medium

6.6

High

6.6

Critical

Feb 12, 2025

A major data breach allegedly impacted OmniGPT, an AI aggregation platform, exposing user emails, phone numbers, chat logs, and sensitive API/credential data.

Compromise of personally identifiable information (PII), API keys, credentials, and millions of chat messages elevated risk of identity theft, account takeover, and privacy violations.

7.8

Low

7.8

Medium

7.8

High

7.8

Critical

Feb 10, 2025

Two ML models on Hugging Face contained malicious code that auto-executed on load, creating reverse-shells.

Demonstrates that open-source model repositories can become vectors for code execution attacks — undermining trust in shared AI supply chain.

5.7

Low

5.7

Medium

5.7

High

5.7

Critical

Dec 13, 2024

Optum left an internal AI “SOP Chatbot” publicly accessible, enabling anyone on the internet to query claim-handling prompts.

Confidential workflows and company SOPs were exposed — a stark warning that even internal-only AI tools can become large-scale exposure points.

4.3

Low

4.3

Medium

4.3

High

4.3

Critical

Sep 24, 2024

A severe container-escape bug in NVIDIA’s Container Toolkit (CVE-2025-23266) can allow GPU containers to break out and take over host systems.

Allows attackers to escalate privileges from a container to root on the host — threatening cloud AI services, shared GPU clusters, and enterprise AI infrastructure.

8.7

Low

8.7

Medium

8.7

High

8.7

Critical