ServiceNow “BodySnatcher” AI Platform Vulnerability (CVE-2025-12420)
Key Takeaways
- A critical vulnerability (CVE-2025-12420) affected ServiceNow’s AI Platform
- Unauthenticated attackers could impersonate legitimate users
- Flaw impacted Now Assist AI Agents and Virtual Agent API components
- Fixes were deployed on October 30, 2025, but unpatched self-hosted and partner-managed instances remain at risk
ServiceNow AI Platform Vulnerability: Unauthenticated Impersonation Risk
On January 13, 2026, multiple cybersecurity outlets reported that ServiceNow patched a critical vulnerability in its AI Platform that could enable unauthenticated attackers to impersonate legitimate users and execute arbitrary actions. The flaw was tracked as CVE-2025-12420, carrying a CVSS score of 9.3, and was described by some researchers as one of the most severe AI vulnerabilities discovered to date. (The Hacker News)
What Happened: Incident Overview
Security firm AppOmni discovered the vulnerability in October 2025 and disclosed it to ServiceNow. The weakness, nicknamed “BodySnatcher,” stemmed from authentication bypass paths in the Now Assist AI Agents and Virtual Agent API that allowed an unauthenticated threat actor to impersonate any user given only an email address. This bypass could allow an attacker to perform operations permitted to the impersonated user, including modifying records, copying or exfiltrating sensitive data, and potentially escalating privileges within a ServiceNow instance.
ServiceNow released a security patch on October 30, 2025, deploying fixes to most hosted environments and publishing updates for partners and self-hosted deployments. Affected versions of the AI components included older releases of Now Assist AI Agents (sn_aia) and the Virtual Agent API (sn_va_as_service).
While there is no public evidence of exploitation in the wild prior to patching, researchers warn that the window between disclosure and exploitation can be brief, and many enterprise environments lag in applying critical patches. (CyberScoop)
How the Vulnerability Happened
The vulnerability arose from flawed access control and account-linking logic in ServiceNow's AI platform layer. The affected AI components trusted contextual identifiers, such as bare email addresses, along with hardcoded platform secrets during agent execution flows. This combination allowed unauthenticated actors to impersonate users and trigger AI workflows with elevated privileges. (AppOmni)
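To make this anti-pattern concrete, here is a minimal, self-contained Python sketch. It is not ServiceNow code, and every name in it (`Session`, `USER_DIRECTORY`, the `resolve_actor_*` functions) is hypothetical; it only contrasts deriving the acting identity from caller-supplied content with deriving it from a verified server-side session.

```python
# Hypothetical illustration of the flaw class; not ServiceNow's actual code.
from dataclasses import dataclass, field


@dataclass
class Session:
    """Stands in for a server-side, verified authentication context."""
    email: str | None = None

    def is_authenticated(self) -> bool:
        return self.email is not None


@dataclass
class User:
    email: str
    roles: list[str] = field(default_factory=list)


USER_DIRECTORY = {
    "admin@example.com": User("admin@example.com", ["admin"]),
}


def resolve_actor_vulnerable(payload: dict) -> User:
    # ANTI-PATTERN: the acting identity comes from a caller-controlled
    # field, so anyone who knows a valid email can act as that user.
    return USER_DIRECTORY[payload["user_email"]]


def resolve_actor_fixed(payload: dict, session: Session) -> User:
    # SAFER: identity is derived from the verified session, never from
    # request content; the payload may request work, not an identity.
    if not session.is_authenticated():
        raise PermissionError("authentication required")
    return USER_DIRECTORY[session.email]
```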
Additionally, the design of agent discovery workflows and default configurations increased attack surface, enabling second-order prompt injection and agent-to-agent escalation scenarios under certain conditions. These behaviors highlighted the risk of agentic AI components that trust internal context and inter-agent communications without robust verification. (Cyber Security News)
This class of flaw demonstrates that AI integration layers and conversational agent APIs can inadvertently act as privileged execution paths if authentication and access controls are not rigorously enforced.
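A sketch of the corresponding control, again with hypothetical names throughout: every agent-initiated action is re-authorized against the verified actor's role permissions at execution time, with deny-by-default semantics.

```python
# Hypothetical sketch: deny-by-default authorization at agent execution time.

ROLE_PERMISSIONS = {
    "admin": {"read_record", "update_record", "delete_record"},
    "itil": {"read_record", "update_record"},
    "guest": {"read_record"},
}


def execute_agent_action(actor_roles: list[str], action: str, record_id: str) -> str:
    # Union of the permissions granted by the actor's roles.
    allowed = set().union(*(ROLE_PERMISSIONS.get(r, set()) for r in actor_roles))
    if action not in allowed:
        # The agent never inherits privileges beyond those of the
        # verified user it acts for.
        raise PermissionError(f"{action!r} not permitted for roles {actor_roles}")
    return f"executed {action} on {record_id}"
```

The design point is that authorization happens per action at execution time, not once at agent discovery, so a compromised or confused agent cannot exceed the authenticated user's rights.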
Impact: Why It Matters
ServiceNow is a core enterprise workflow automation platform used by the majority of Fortune 500 organizations. Its AI capabilities are deeply integrated into IT service management, HR workflows, customer service, and security operations. A critical vulnerability in these AI layers therefore represents a major enterprise attack surface, offering potential lateral movement opportunities for attackers and risks to sensitive operational data. (Dark Reading)
Even without confirmed exploitation, the vulnerability underscores the challenges organizations face in securing AI components embedded within essential business tooling. Unauthenticated impersonation could lead to data exposure, unauthorized process changes, and privilege escalation within enterprise environments if not properly mitigated.
PointGuard AI Perspective
From the PointGuard AI perspective, the ServiceNow BodySnatcher vulnerability highlights the need for security-aware AI governance and continuous validation of agentic AI components in enterprise systems.
Vendor-provided AI features must be integrated with enterprise identity, authentication, and authorization controls rather than treated as standalone automation layers. Security teams should enforce strict access validation and telemetry monitoring around AI workflows, especially those that can modify state or trigger business actions.
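On the telemetry point, one plausible approach is a structured audit event for every agent action, allowed or denied. The event schema below is an assumption for illustration, not a ServiceNow format.

```python
# Hypothetical sketch: structured audit events for AI workflow actions.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_workflow_audit")


def audit_agent_action(actor: str, agent: str, action: str,
                       target: str, allowed: bool) -> None:
    # Every agent-initiated action, allowed or denied, leaves a record
    # that downstream detection rules can query.
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "agent": agent,
        "action": action,
        "target": target,
        "allowed": allowed,
    }))
```

Denied attempts are often the highest-value signal here: impersonation probes tend to surface first as rejected requests against privileged or nonexistent accounts.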
Continuous AI security assessments should include:
- Analysis of AI agent privilege boundaries and trust assumptions
- Review of implicit execution paths exposed via conversational interfaces
- Detection of unauthorized or anomalous agent behavior (a sketch of this check follows the list)
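The sketch below shows one simple form of that last check: comparing observed agent actions against a declared per-agent baseline. `AGENT_BASELINE` is an assumed configuration artifact for illustration, not a ServiceNow feature.

```python
# Hypothetical sketch: flagging agent actions outside a declared baseline.

# Declared privilege boundary per agent (assumed configuration).
AGENT_BASELINE = {
    "triage_agent": {"read_record", "add_comment"},
}


def find_anomalies(events: list[dict]) -> list[dict]:
    """Return events where an agent acted outside its declared baseline."""
    return [
        e for e in events
        if e["action"] not in AGENT_BASELINE.get(e["agent"], set())
    ]


events = [
    {"agent": "triage_agent", "action": "read_record"},
    {"agent": "triage_agent", "action": "delete_record"},  # out of bounds
]
print(find_anomalies(events))  # flags only the delete_record event
```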
This incident also reinforces that patching alone is not sufficient. Organizations must evaluate default AI configurations, enforce human-in-the-loop verification for sensitive operations, and apply runtime monitoring to detect exploitation attempts as AI features are adopted at scale.
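As a final illustration of human-in-the-loop verification (hypothetical helper names; a sketch, not a prescribed implementation), sensitive agent actions can be parked in a pending queue until a human approver releases them:

```python
# Hypothetical sketch: human approval gate for sensitive agent actions.
from typing import Callable

SENSITIVE_ACTIONS = {"delete_record", "change_role", "export_data"}
PENDING_APPROVALS: list[dict] = []


def submit_agent_action(action: str, target: str,
                        perform: Callable[[str], str]) -> str:
    if action in SENSITIVE_ACTIONS:
        # Park the action for explicit human review instead of executing.
        PENDING_APPROVALS.append(
            {"action": action, "target": target, "perform": perform})
        return "pending human approval"
    return perform(target)


def approve_next() -> str:
    # Called by a human approver after reviewing the queued action.
    item = PENDING_APPROVALS.pop(0)
    return item["perform"](item["target"])
```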
Incident Scorecard Details
Total AISSI Score: 8.7/10
| Factor | Score | Rationale |
| --- | --- | --- |
| Criticality | 9.0 | Unauthenticated impersonation of AI platform users |
| Propagation | 8.5 | Potential broad impact across enterprise instances |
| Exploitability | 9.0 | No authentication needed for initial exploitation |
| Supply Chain | 6.0 | Enterprise SaaS platform risk for many customers |
| Business Impact | 8.5 | Potential operational and data compromise |
Sources
- Dark Reading: "'Most Severe AI Vulnerability to Date' Hits ServiceNow" (January 13, 2026)
- The Hacker News: "ServiceNow Patches Critical AI Platform Flaw (CVE-2025-12420)"
- CyberScoop: "ServiceNow Fixes Critical AI Vulnerability" (advisory coverage)
- AppOmni: BodySnatcher vulnerability research analysis
