San Jose, CA — February 2026 — PointGuard AI today announced the launch of its AI Security Incident Tracker, a public resource developed by the PointGuard AI Research Lab to monitor, document, and analyze major AI-related security incidents affecting enterprises, technology providers, and critical AI infrastructure.
As AI adoption accelerates across enterprise applications, attackers are exploiting a rapidly expanding attack surface. Agentic AI, autonomous agents, and orchestration layers such as MCP are significantly increasing security exposure by introducing new tools, permissions, and dynamic execution paths.
“There’s a lot of noise around AI security, but few resources that consistently track and compare incidents using a structured methodology,” said Pravin Kothari, CEO of PointGuard AI. “We built this tracker to bring clarity and context to real-world AI threats.”
The PointGuard AI Research Lab collaborates with enterprise CISOs, security practitioners, industry experts, and technology partners to validate incidents and refine its methodology. The tracker focuses strictly on documented incidents and demonstrated vulnerabilities, supported by credible third-party sources such as NVD, the MIT AI Risk Initiative, Cornell arXiv, GitHub, and the AI Incident Database.
To date, the Lab has documented nearly 80 significant AI-related security incidents across 2025 and 2026, with more than half occurring in the first 90 days of 2026. Incidents span major platforms including OpenClaw/Moltbook, Anthropic Claude, Microsoft Copilot, Google Gemini, ServiceNow, and Salesforce.
For each incident, the tracker provides an analysis of what happened, how the breach unfolded, and mitigation guidance, with explainer videos for major cases. It covers emerging risks including prompt injection, MCP and agentic vulnerabilities, AI coding and framework flaws, supply chain exposure, data leaks, credential theft, and model compromise.
To enable consistent comparison, the Lab introduced the AI Security Severity Index (AISSI), a 0–10 scoring system based on weighted factors including Criticality, Propagation, Exploitability, Supply Chain, and Business Impact.
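A weighted multi-factor score of this kind can be sketched in a few lines. The factor names below come from the announcement, but the weights, the per-factor 0–10 rating scale, and the function name are illustrative assumptions, not PointGuard AI's published methodology.

```python
# Illustrative sketch of a weighted 0-10 severity score in the spirit of AISSI.
# Factor names are from the announcement; the weights are assumed for
# illustration and are NOT PointGuard AI's actual values.
FACTOR_WEIGHTS = {
    "criticality": 0.30,
    "propagation": 0.20,
    "exploitability": 0.20,
    "supply_chain": 0.15,
    "business_impact": 0.15,
}  # weights sum to 1.0, so the combined score stays in [0, 10]

def aissi_score(factors: dict) -> float:
    """Combine per-factor ratings (each 0-10) into a single weighted 0-10 score."""
    for name, value in factors.items():
        if name not in FACTOR_WEIGHTS:
            raise ValueError(f"unknown factor: {name}")
        if not 0 <= value <= 10:
            raise ValueError(f"{name} rating must be in [0, 10]")
    # Missing factors default to 0, i.e. no contribution to severity.
    return round(sum(w * factors.get(name, 0.0)
                     for name, w in FACTOR_WEIGHTS.items()), 1)

example = {
    "criticality": 9.0,
    "propagation": 7.0,
    "exploitability": 8.0,
    "supply_chain": 4.0,
    "business_impact": 6.0,
}
print(aissi_score(example))  # prints 7.2
```

Because the weights sum to 1.0, an incident rated 10 on every factor scores exactly 10, which keeps scores directly comparable across incidents.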
“Agentic AI is rapidly expanding the attack surface,” said Kothari. “We designed this tool to help enterprises stay ahead of real incidents and protect their AI systems with confidence.”
The PointGuard AI Research Lab welcomes suggestions for incidents to include and feedback on methodology. To explore the tracker and subscribe for updates, visit: https://www.pointguardai.com/ai-security-incident-tracker
