LiteLLM Token Collision Weakens Authentication (CVE-2026-35030)
Key Takeaways
- LiteLLM used only the first 20 characters of a token for an OIDC cache key
- In some deployments, crafted collisions could let attackers inherit cached identity data
- The issue requires JWT authentication with OIDC caching enabled
- The flaw was fixed in version 1.83.0
A small cache design choice creates a bigger identity risk
LiteLLM disclosed an authentication bypass vulnerability tied to how OIDC userinfo cache keys were generated. In affected deployments, a crafted token collision could allow an attacker to impersonate another user and gain unauthorized access. (GitHub)
What We Know
According to the GitHub Advisory for CVE-2026-35030, LiteLLM used token[:20] as the cache key for OIDC userinfo lookups when JWT authentication was enabled. That meant an attacker could craft a different token sharing the same first 20 characters as a legitimate one and potentially inherit that user's cached identity information. The NVD entry confirms the same core mechanism and notes that the issue affects versions earlier than 1.83.0.
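The collision mechanism can be shown with a minimal sketch. The function names below are hypothetical and this is not LiteLLM's actual code; it only illustrates why truncating a token for a cache key is unsafe, and how keying on a digest of the full token avoids the problem.

```python
import hashlib

# Illustrative sketch of the flaw: keying a userinfo cache on
# the token's first 20 characters (hypothetical helper, not
# LiteLLM's actual implementation).
def vulnerable_cache_key(token: str) -> str:
    return token[:20]

# Hardened alternative: key on a digest of the full token, so two
# tokens share a cache entry only if they are byte-for-byte identical.
def hardened_cache_key(token: str) -> str:
    return hashlib.sha256(token.encode()).hexdigest()

legit = "eyJhbGciOiJSUzI1NiJ9.legit-user-payload.sig"
forged = legit[:20] + ".attacker-controlled-suffix"

# The truncated key treats two distinct tokens as the same entry...
assert vulnerable_cache_key(legit) == vulnerable_cache_key(forged)
# ...while the full-token digest keeps them separate.
assert hardened_cache_key(legit) != hardened_cache_key(forged)
```

With the truncated key, whoever populates the cache entry first defines the identity that later colliding tokens inherit, which is exactly the impersonation path the advisory describes.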
LiteLLM’s own April 2026 security hardening update says most deployments are not affected because the vulnerable JWT mode is off by default, but it also confirms the fix and recommends mitigation steps. That narrows the exposure without diminishing the seriousness of the design flaw. (LiteLLM)
What Could Happen
This is a traditional identity-handling weakness landing in a central AI gateway. In affected deployments, unauthorized users could be treated as authenticated users, potentially gaining access to model traffic, logs, or administrative functions tied to the cached identity.
The AI-specific concern is concentration of access. A gateway often fronts multiple providers, applications, and teams, so even though the preconditions here are narrower than in the earlier LiteLLM RCE issue, the authentication flaw still matters: it sits at a shared trust boundary. A small cache-key shortcut in a central component can create outsized trust failures. (GitHub)
Why It Matters
Authentication bugs are always serious, but in AI infrastructure they can be especially hard to detect because access layers are often optimized for speed, observability, and cost control rather than security review. This incident shows how ordinary engineering shortcuts can undermine trust across a multi-model environment.
It also reinforces a broader pattern: AI gateways should be treated as critical infrastructure. Even where the vulnerable mode is not enabled by default, organizations need clear visibility into which authentication features are active and how identity state is cached, reused, and monitored. (GitHub)
PointGuard AI Perspective
Identity and configuration weaknesses in AI gateways are difficult to manage when teams lack a full picture of where AI services are deployed. PointGuard AI’s AI Discovery helps organizations find AI gateways, models, agents, and supporting services that might otherwise escape normal review. (PointGuard AI)
PointGuard AI’s AI Security Posture Management helps prioritize risks across AI components and surface weak points before they become incidents, especially in shared infrastructure layers. (PointGuard AI)
And because identity flaws often become governance issues as well as security issues, AI Governance helps teams apply policy and evidence collection across AI systems in regulated or high-risk environments. (PointGuard AI)
Incident Scorecard Details
Total AISSI Score: 5.9/10
- Criticality = 7 (AISSI weighting 25%): shared AI gateway identity control can expose sensitive access paths
- Propagation = 6 (AISSI weighting 20%): limited by configuration but still affects centralized access
- Exploitability = 5 (AISSI weighting 15%): public technique is documented, though broad abuse is not confirmed
- Supply Chain = 8 (AISSI weighting 15%): external AI gateway dependency increases organizational exposure
- Business Impact = 4 (AISSI weighting 25%): no confirmed material harm but meaningful unauthorized-access potential
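The total score is a weighted sum of the factor scores above, which can be reproduced directly:

```python
# AISSI total: each factor score (0-10) times its weighting, summed.
factors = {
    "Criticality": (7, 0.25),
    "Propagation": (6, 0.20),
    "Exploitability": (5, 0.15),
    "Supply Chain": (8, 0.15),
    "Business Impact": (4, 0.25),
}
total = sum(score * weight for score, weight in factors.values())
print(round(total, 1))  # 5.9
```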
Sources
