AppSOC is now PointGuard AI

AI Security Incident Tracker

Get updates from our Research Lab on the latest incidents and threats affecting AI applications and agents, and steps you can take to protect against them.

Categories

Incident

Date Reported

Summary

Impact

Severity Score

Jan 7, 2026

Zero-click prompt injection enabled silent data exfiltration from ChatGPT agents.

The attack enabled unauthorized data access, bypassed user awareness, and exposed systemic risks in autonomous AI agents with connected enterprise services.

7.6

Low

7.6

Medium

7.6

High

7.6

Critical

Jan 5, 2026

Researchers demonstrated prompt injection attacks that cause medical AI chatbots to provide unsafe advice.

Manipulated chatbot outputs could lead users to follow harmful medical guidance, exposing patients to safety risks and organizations to legal, regulatory, and trust consequences.

7.1

Low

7.1

Medium

7.1

High

7.1

Critical

Dec 30, 2025

Malicious browser extensions secretly captured and exfiltrated AI chatbot conversation data from unsuspecting users.

Sensitive prompts, proprietary data, and personal information entered into AI tools were silently harvested at scale, exposing users to privacy, IP, and compliance risks.

7.6

Low

7.6

Medium

7.6

High

7.6

Critical

Dec 12, 2025

macOS malware leveraged trusted AI conversations and search ads to trick users into executing malicious commands.

Attackers abused user trust in AI tools and search results to deploy credential-stealing malware without exploiting traditional software vulnerabilities.

7.2

Low

7.2

Medium

7.2

High

7.2

Critical

Dec 10, 2025

State-linked threat actors deployed the BRICKSTORM backdoor to infiltrate VMware hypervisors and management systems for long-term espionage.

The compromise of virtualization layers threatens entire AI and enterprise compute stacks, enabling deep persistence, data theft, and operational disruption across critical infrastructure.

8.2

Low

8.2

Medium

8.2

High

8.2

Critical

Dec 8, 2025

Researchers uncovered 30+ vulnerabilities in AI coding tools enabling prompt-injected insider attacks.

Hijacked AI IDEs could steal secrets, execute arbitrary commands, and compromise enterprise supply chains through developer environments.

8.1

Low

8.1

Medium

8.1

High

8.1

Critical

Dec 8, 2025

A critical vulnerability in React/Next.js (React2Shell) enables remote code execution, now actively exploited by China-linked threat actors.

Unauthenticated remote code execution could compromise servers powering web apps, APIs, and any AI workloads relying on Node.js-based web backends — enabling data theft, supply-chain exposure, or backend compromise.

8.3

Low

8.3

Medium

8.3

High

8.3

Critical

Nov 26, 2025

Microsoft’s new “agent workspace” AI agents in Windows 11 can be tricked via prompt injection — risking data loss, malware, or unauthorized file access.

Expands attack surface on desktops: background AI agents with file access raise real threat of exploitation or misuse — especially when encountering untrusted content.

6.2

Low

6.2

Medium

6.2

High

6.2

Critical

Nov 19, 2025

Researchers showed that ServiceNow’s AI agents can be manipulated via prompt injection to perform unauthorized actions inside enterprise workflows.

Attackers could craft malicious inputs that cause AI agents to delete records, alter tickets, expose sensitive data, or trigger automated workflows.

6.3

Low

6.3

Medium

6.3

High

6.3

Critical

Nov 19, 2025

Active campaign hijacks exposed Ray clusters via unauthenticated Jobs API (CVE-2023-48022), harvesting GPU/CPU power for cryptomining and spreading across infrastructure.

Massive AI-infrastructure compromise — converts distributed AI clusters into self-propagating botnet, threatens data/model integrity, compute availability, and cross-tenant security.

7.8

Low

7.8

Medium

7.8

High

7.8

Critical

Nov 19, 2025

A hidden MCP API in Comet Browser allowed embedded extensions to execute local system commands — enabling attackers to hijack entire user devices.

Critical breach of browser and endpoint security; a clear example of how AI-enabled browsers can become full-system attack vectors.

7.9

Low

7.9

Medium

7.9

High

7.9

Critical

Nov 15, 2025

Attackers manipulated an Claude agent in coordinated cyber-espionage campaign, automating most intrusion steps through agentic workflows.

A critical escalation showing AI agents can act as autonomous attackers, enabling rapid, scalable intrusions that traditional AppSec and cloud defenses struggle to detect.

8.3

Low

8.3

Medium

8.3

High

8.3

Critical

Nov 14, 2025

Severe vulnerabilities in major AI inference frameworks exposed systems to remote code execution through insecure ZeroMQ and Python pickle mechanisms.

High-severity risk affecting Meta, NVIDIA, Microsoft, and multiple AI toolchains; exploited via widely deployed components across the AI ecosystem.

7.4

Low

7.4

Medium

7.4

High

7.4

Critical

Nov 7, 2025

A new side-channel exploit lets adversaries infer the topics of encrypted chats with language models via packet size and timing analysis.

Calls into question the confidentiality of LLM-based chat and assistant tools — even when using encryption, adversaries with network visibility may deduce sensitive topics and compromise user or enterprise privacy.

6.5

Low

6.5

Medium

6.5

High

6.5

Critical

Nov 5, 2025

Malware families now use AI / LLMs to dynamically rewrite code during runtime, enabling self-modifying, adaptive attacks that evade traditional defenses.

Signals a new phase of cyber threats — AI-augmented, adaptive malware capable of evolving mid-execution, complicating detection, increasing persistence, and threatening enterprise systems globally.

8

Low

8

Medium

8

High

8

Critical

Aug 20, 2025

Compromised Drift OAuth tokens let attackers exfiltrate Salesforce data across hundreds of organizations.

One of the largest SaaS AI supply-chain breaches to date, exposing Salesforce instances, secrets, and customer records at over 700 organizations worldwide.

9.5

Low

9.5

Medium

9.5

High

9.5

Critical

Jul 23, 2025

Replit’s AI coding assistant deleted a live company database during a “vibe-coding” session.

Severe data loss and exposure of systemic risk in autonomous AI coding tools — threatens trust in “AI-assisted development.

5.5

Low

5.5

Medium

5.5

High

5.5

Critical

Jul 23, 2025

Malicious code in the Amazon Q extension enabled destructive wipe commands through the AI coding agent.

Flaw exposed nearly a million developers to potential file-system and cloud-resource destruction, highlighting severe supply-chain and agentic-tool risks within AI-assisted development environments.

7.3

Low

7.3

Medium

7.3

High

7.3

Critical

Jul 12, 2025

A novel attack allows malicious prompts to slip past AI-moderation filters via minimal text changes (e.g. a single character).

Critical weakness in many LLM moderation systems — exposes applications relying on filter-based safety to prompt injection, content abuse, or other misuse at scale.

6.1

Low

6.1

Medium

6.1

High

6.1

Critical

Jul 9, 2025

Over 64 million resumes exposed through misconfiguration of Paradox.ai hiring platform

High reputational damage, limited propagation, large-scale data breach of personal information

7.6

Low

7.6

Medium

7.6

High

7.6

Critical

May 19, 2025

Misconfigured vendor database exposed patient data, revealing how weak controls jeopardize AI-powered healthcare ecosystems.

Extensive PHI and PII exposure threatens identity and medical fraud, with risks amplified across AI systems that rely on compromised healthcare data for analytics and automated decision-making.

7.7

Low

7.7

Medium

7.7

High

7.7

Critical

Feb 28, 2025

Researchers found over 12,000 live API keys, passwords and credentials embedded in publicly archived web data used to train large-language models.

Major supply-chain vulnerability for AI — use of these credentials in training sets threatens cloud and API security across many organizations.

7.1

Low

7.1

Medium

7.1

High

7.1

Critical

Feb 10, 2025

Two ML models on Hugging Face contained malicious code that auto-executed on load, creating reverse-shells.

Demonstrates that open-source model repositories can become vectors for code execution attacks — undermining trust in shared AI supply chain.

6.2

Low

6.2

Medium

6.2

High

6.2

Critical

Dec 13, 2024

Optum left an internal AI “SOP Chatbot” publicly accessible, enabling anyone on the internet to query claim-handling prompts.

Confidential workflows and company SOPs were exposed — a stark warning that even internal-only AI tools can become large-scale exposure points.

4.3

Low

4.3

Medium

4.3

High

4.3

Critical

Sep 24, 2024

A severe container-escape bug in NVIDIA’s Container Toolkit (CVE-2025-23266) can allow GPU containers to break out and take over host systems.

Allows attackers to escalate privileges from a container to root on the host — threatening cloud AI services, shared GPU clusters, and enterprise AI infrastructure.

8.7

Low

8.7

Medium

8.7

High

8.7

Critical