January 2026 delivered a steady cadence of serious incidents affecting AI systems, agents, tools, and protocols. This recap is based on the PointGuard AI Security Incident Tracker, compiled by our Research Labs to surface real-world AI security failures and attacks across models, tooling, and autonomous workflows. (pointguardai.com)
Each incident on the tracker is evaluated using the AI Security Severity Index (AISSI), an industry-informed scoring framework that captures exploitability, propagation, business impact, supply chain risk, and criticality.
Here’s what January’s data revealed — and why these trends matter.
A Month of High-Impact AI Security Events
In January, multiple incidents scored at or above 7.0 on the AISSI scale, reflecting serious risk to organizations adopting AI. Key examples include:
- ServiceNow “BodySnatcher” AI Platform Vulnerability (CVE-2025-12420) – Critical enterprise AI flaw enabling unauthenticated arbitrary actions (AISSI ~8.7).
- Microsoft Copilot “Reprompt” Attack – Prompt manipulation risk leading to session hijack and data exfiltration (AISSI ~8.3).
- MCP Without Guardrails Leaves Clawdbot Exposed – Unauthenticated MCP endpoints exposed agents to takeover and credential leakage (AISSI ~8.1).
- Typebot Credential Theft Trick (CVE-2025-65098) – Credential and API key theft through bot preview (AISSI ~7.6).
- Chainlit Chains Break, Sensitive Data Can Slip Out – Multiple vulnerabilities enabling arbitrary file access and SSRF (AISSI ~7.3).
- Git Happens: MCP Flaws Open Door to Code Execution – Anthropic MCP Git server flaws enabling unauthorized access and execution (AISSI ~7.2).
- 5ire MCP Vulnerability (CVE-2026-22792) – Unsafe MCP client rendering leading to code execution risk (AISSI ~7.2).
Incidents with AISSI scores below 7.0 were also documented (e.g., prompt injection exposing calendar data), and while important, they tended to indicate narrower impact or require more specific conditions.
Trend 1: Significant Increase in Threat Volume and Severity
Compared to preceding months, January’s incident list reflects more frequent and more impactful AI security failures. A variety of organizations, from major enterprise platforms to open-source frameworks, appeared in published incident reports. This rising frequency signals that attackers are pivoting from curiosity-level experimentation to systematic exploitation of AI-centric workflows.
Moreover, the breadth of impacted technologies — from enterprise bots to open-source MCP services — shows that the risk surface is expanding faster than many teams expected.
Trend 2: MCP Usage Surges Alongside New Threat Class
One of the most noteworthy patterns in January is the prevalence of Model Context Protocol (MCP)-related incidents. The MCP is rapidly becoming a foundational mechanism for integrating AI agents with tools, repositories, and automation services. But with adoption has come exposure.
Several MCP incidents this month scored above 7.0:
- Clawdbot MCP exposure
- JamInspector MCP control-plane flaw — exposing control paths that could be manipulated (AISSI ~7.3).
- Anthropic MCP Git server flaws
These events illustrate that MCP components can become high-impact attack vectors — and without guardrails, they can enable credential theft, unauthorized code execution, or agent manipulation. MCP threats now span configuration errors to outright control flips, underscoring the need for authentication, authorization, and runtime inspection in MCP ecosystems.
Trend 3: Prompt Injection Still Matters — But In New Contexts
Prompt injection vulnerabilities continue to show up, but increasingly as part of multi-stage attack vectors rather than standalone weaknesses.
The Microsoft Copilot Reprompt attack is a prime example. By manipulating prompt parameters in URLs, attackers could hijack sessions and siphon data across interactions.
Additionally, lower-severity prompt cases in the tracker (e.g., calendar exposure) remind us that indirect prompt injection remains a real risk when AI systems interact with structured or external content. Prompt manipulation still functions as a lever attackers use to influence agent actions or tool invocations, especially when combined with broader protocol weaknesses.
Trend 4: Risk Extends Beyond Models to Toolchains and Frameworks
Another striking pattern in January’s incidents is that most threats originated outside the model itself:
- Framework vulnerabilities, such as in Chainlit, allowed file access and outbound requests.
- Client rendering flaws in MCP tools enabled local code execution risk.
- Credential theft mechanisms in bot previews stole API keys and tokens.
This reflects a broader reality: AI security is no longer just about model integrity or prompt safety. It spans:
- Toolchains used to automate workflows
- Protocols that bridge models with code and repositories
- Client interfaces that execute or render outputs
- Credential and key management associated with AI service access
In many cases, traditional AppSec categories such as authentication, authorization, input validation, and access control are still at the heart of AI incidents — but amplified by AI’s automation and autonomy.
Final Thoughts
January 2026 showed us that:
- AI systems are now targeted across multiple layers — from enterprise platforms to open-source tooling.
- MCP-related threats have emerged as a major class of concern, reflecting the growing attack surface as agent frameworks proliferate.
- Prompt injection remains relevant, especially when tied to data exfiltration or session hijacking techniques.
- Most high-severity incidents stem from integrations, toolchains, and protocol weaknesses, not just language model behavior.
As the threat landscape evolves, defenders must broaden their view of AI security to include not only models, but also the surrounding ecosystem, protocols, and execution environments where AI systems live and act.
Staying ahead requires real-world incident data, strong severity measurement like AISSI, and proactive defenses that understand AI as an integrated stack — not a black box.




