AppSOC is now PointGuard AI

AI Security Incident Tracker

Categories
Sort by

Incident

Date Reported

Summary

Impact

Severity Score

Windows 11 Agentic AI Features Raise Major Security Risks

November 26, 2025

Microsoft’s new “agent workspace” AI agents in Windows 11 can be tricked via prompt injection — risking data loss, malware, or unauthorized file access.

Expands attack surface on desktops: background AI agents with file access raise real threat of exploitation or misuse — especially when encountering untrusted content.

6.2

Low

6.2

Medium

6.2

High

6.2

Critical

ServiceNow AI Agents Can Be Tricked Into Harmful Actions

November 19, 2025

Researchers showed that ServiceNow’s AI agents can be manipulated via prompt injection to perform unauthorized actions inside enterprise workflows.

Attackers could craft malicious inputs that cause AI agents to delete records, alter tickets, expose sensitive data, or trigger automated workflows.

6.3

Low

6.3

Medium

6.3

High

6.3

Critical

ShadowRay 2.0: Global AI-Infrastructure Botnet via Ray Flaw

November 19, 2025

Active campaign hijacks exposed Ray clusters via unauthenticated Jobs API (CVE-2023-48022), harvesting GPU/CPU power for cryptomining and spreading across infrastructure.

Massive AI-infrastructure compromise — converts distributed AI clusters into self-propagating botnet, threatens data/model integrity, compute availability, and cross-tenant security.

7.8

Low

7.8

Medium

7.8

High

7.8

Critical

Comet Browser MCP Flaw: Device Takeover via AI-Browser API (“CometJacking”)

November 19, 2025

A hidden MCP API in Comet Browser allowed embedded extensions to execute local system commands — enabling attackers to hijack entire user devices.

Critical breach of browser and endpoint security; a clear example of how AI-enabled browsers can become full-system attack vectors.

7.9

Low

7.9

Medium

7.9

High

7.9

Critical

Anthropic Breach: First Large-Scale Agentic AI Cyberattack

November 15, 2025

Attackers manipulated an Claude agent in coordinated cyber-espionage campaign, automating most intrusion steps through agentic workflows.

A critical escalation showing AI agents can act as autonomous attackers, enabling rapid, scalable intrusions that traditional AppSec and cloud defenses struggle to detect.

8.3

Low

8.3

Medium

8.3

High

8.3

Critical

ShadowMQ: AI Framework Vulnerabilities Expose Inference Platforms

November 14, 2025

Severe vulnerabilities in major AI inference frameworks exposed systems to remote code execution through insecure ZeroMQ and Python pickle mechanisms.

High-severity risk affecting Meta, NVIDIA, Microsoft, and multiple AI toolchains; exploited via widely deployed components across the AI ecosystem.

7.4

Low

7.4

Medium

7.4

High

7.4

Critical

Whisper Leak Side-Channel Attack on Remote LLMs

November 7, 2025

A new side-channel exploit lets adversaries infer the topics of encrypted chats with language models via packet size and timing analysis.

Calls into question the confidentiality of LLM-based chat and assistant tools — even when using encryption, adversaries with network visibility may deduce sensitive topics and compromise user or enterprise privacy.

6.5

Low

6.5

Medium

6.5

High

6.5

Critical

AI-Powered Malware That Morphs Mid-Attack in the Wild

November 5, 2025

Malware families now use AI / LLMs to dynamically rewrite code during runtime, enabling self-modifying, adaptive attacks that evade traditional defenses.

Signals a new phase of cyber threats — AI-augmented, adaptive malware capable of evolving mid-execution, complicating detection, increasing persistence, and threatening enterprise systems globally.

8

Low

8

Medium

8

High

8

Critical

AI Supply Chain Failure Breaches Salesforce Accounts of 700 Enterprises

August 20, 2025

Compromised Drift OAuth tokens let attackers exfiltrate Salesforce data across hundreds of organizations.

One of the largest SaaS AI supply-chain breaches to date, exposing Salesforce instances, secrets, and customer records at over 700 organizations worldwide.

9.5

Low

9.5

Medium

9.5

High

9.5

Critical

DELETE Happens: Replit AI Coding Tool Wipes Production Database

July 23, 2025

Replit’s AI coding assistant deleted a live company database during a “vibe-coding” session.

Severe data loss and exposure of systemic risk in autonomous AI coding tools — threatens trust in “AI-assisted development.

5.5

Low

5.5

Medium

5.5

High

5.5

Critical

Amazon Q Coding Agent Compromised with Wiper Commands

July 23, 2025

Malicious code in the Amazon Q extension enabled destructive wipe commands through the AI coding agent.

Flaw exposed nearly a million developers to potential file-system and cloud-resource destruction, highlighting severe supply-chain and agentic-tool risks within AI-assisted development environments.

7.3

Low

7.3

Medium

7.3

High

7.3

Critical

TokenBreak: Single-Character Prompt Manipulation Bypasses AI Safety Filters

July 12, 2025

A novel attack allows malicious prompts to slip past AI-moderation filters via minimal text changes (e.g. a single character).

Critical weakness in many LLM moderation systems — exposes applications relying on filter-based safety to prompt injection, content abuse, or other misuse at scale.

6.1

Low

6.1

Medium

6.1

High

6.1

Critical

McDonald’s AI Security Breach: 64 Million Resumes Served

July 9, 2025

Over 64 million resumes exposed through misconfiguration of Paradox.ai hiring platform

High reputational damage, limited propagation, large-scale data breach of personal information

7.6

Low

7.6

Medium

7.6

High

7.6

Critical

12,000+ API Keys and Passwords Exposed in AI Training Data

February 28, 2025

Researchers found over 12,000 live API keys, passwords and credentials embedded in publicly archived web data used to train large-language models.

Major supply-chain vulnerability for AI — use of these credentials in training sets threatens cloud and API security across many organizations.

7.1

Low

7.1

Medium

7.1

High

7.1

Critical

Malicious ML Models Discovered on Hugging Face

February 10, 2025

Two ML models on Hugging Face contained malicious code that auto-executed on load, creating reverse-shells.

Demonstrates that open-source model repositories can become vectors for code execution attacks — undermining trust in shared AI supply chain.

6.2

Low

6.2

Medium

6.2

High

6.2

Critical

Optum Accidentally Exposed Internal AI Chatbot to the Internet

December 13, 2024

Optum left an internal AI “SOP Chatbot” publicly accessible, enabling anyone on the internet to query claim-handling prompts.

Confidential workflows and company SOPs were exposed — a stark warning that even internal-only AI tools can become large-scale exposure points.

4.3

Low

4.3

Medium

4.3

High

4.3

Critical

NVIDIAScape: Critical Container Escape in Toolkit Puts Cloud AI at Host-Takeover Risk

September 24, 2024

A severe container-escape bug in NVIDIA’s Container Toolkit (CVE-2025-23266) can allow GPU containers to break out and take over host systems.

Allows attackers to escalate privileges from a container to root on the host — threatening cloud AI services, shared GPU clusters, and enterprise AI infrastructure.

8.7

Low

8.7

Medium

8.7

High

8.7

Critical