Salesloft Breach: Why AI Agents Need Runtime Protection

One flaw can spread through connected systems if AI integrations go unprotected

The recent breach involving Salesloft’s Drift chatbot is a wake-up call. AI-powered integrations are no longer passive tools; they are active conduits into your most sensitive systems. From Salesforce to Slack, OpenAI to Azure, the stolen authentication tokens did not just expose data; they exposed trust.

This was not a vulnerability in the platforms themselves. It was a failure to govern how AI agents interact with them. At PointGuard AI, we believe security must move upstream into the runtime behavior of GenAI and agentic systems. Our platform enforces real-time guardrails, monitors cross-platform interactions, discovers shared secrets like these, and contains AI workflows before they become lateral movement vectors.

What Happened: The Drift Breach Deep Dive

In early August 2025, attackers exploited a third-party integration between Salesloft’s Drift chatbot and Salesforce via a tool called SalesDrift. Over approximately ten days, threat actors tracked as UNC6395 stole OAuth and refresh tokens tied to the Drift-Salesforce connection. These tokens granted them wide-reaching API access to customer environments.

Using these “skeleton keys,” the attackers exfiltrated sensitive data including business contacts, support cases, product and account records, and even AWS, Snowflake, and VPN access credentials from hundreds of enterprises.

The scope expanded dramatically when Google’s Threat Intelligence Group confirmed that the breach extended beyond Salesforce. Drift’s integrations with Google Workspace and other third-party tools were also impacted. Customers were advised to treat all Drift-related tokens as compromised.

In response, Salesloft and Salesforce disabled the Drift integrations and began a coordinated remediation effort. This included token revocation, forced rotation of compromised credentials, and broad advisories to affected organizations.
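The remediation pattern described above, treating every token tied to a breached integration as compromised and rotating anything past its allowed lifetime, can be sketched in a few lines. This is an illustrative model only; `TokenRecord` and `TokenVault` are hypothetical names, not part of any Salesloft or Salesforce API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class TokenRecord:
    token_id: str
    integration: str          # e.g. "drift-salesforce" (illustrative label)
    issued_at: datetime
    compromised: bool = False

class TokenVault:
    """Tracks OAuth tokens and decides which must be revoked or rotated."""

    def __init__(self, max_age: timedelta = timedelta(days=30)):
        self.max_age = max_age
        self.tokens: dict[str, TokenRecord] = {}

    def register(self, record: TokenRecord) -> None:
        self.tokens[record.token_id] = record

    def mark_integration_compromised(self, integration: str) -> None:
        # Mirror the advisory: assume ALL tokens tied to the breached
        # integration are compromised, not just the ones seen in abuse.
        for rec in self.tokens.values():
            if rec.integration == integration:
                rec.compromised = True

    def tokens_to_rotate(self, now: datetime) -> list[str]:
        # Rotate anything compromised or older than the allowed lifetime.
        return [
            rec.token_id
            for rec in self.tokens.values()
            if rec.compromised or now - rec.issued_at > self.max_age
        ]
```

The key design point is that compromise is tracked per integration, not per token: one flagged connector immediately condemns every credential minted through it.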

Who Was Affected and What It Means

The scale of impact was extraordinary. Among the companies affected were some of the most trusted names in cybersecurity and cloud infrastructure:

  • Zscaler: OAuth tokens tied to Drift were stolen, exposing customer contact details, support cases, and license information.
  • Cloudflare: Reported exposure of customer contact data and 104 API tokens, which were proactively rotated.
  • Palo Alto Networks: Confirmed that business contact records, sales data, and internal support cases were compromised.
  • Tanium, SpyCloud, Proofpoint, and Tenable: Each disclosed that sensitive Salesforce CRM records and support case data were accessed.
  • Google Workspace customers: Google’s Threat Intelligence Group confirmed Drift-linked integrations extended the breach beyond Salesforce, affecting Workspace accounts as well.

The exposure of such data across these firms has sweeping consequences. For enterprises entrusted with securing others, even limited exposure creates cascading risks. Attackers can weaponize the stolen data for spear phishing, identity compromise, and lateral movement into customer environments. For customers of these vendors, the breach erodes trust not only in Drift and Salesloft, but in the entire ecosystem of AI-powered integrations.

Why This Breach Shows AI Agents Magnify Security Risks

1. Lack of Visibility and Governance

This breach originated from an AI integration, an agentic system that could connect to multiple platforms and escalate privileges autonomously. Visibility into such systems was lacking. Shadow integrations like Drift were not fully monitored or governed, highlighting blind spots that exist in many organizations.

2. Agents Increase the Stakes

AI agents operate at machine speed and often carry broad entitlements. Once compromised, they can execute actions before a human operator even notices. In this case, stolen tokens allowed exfiltration and lateral movement without human triggers or alerts.

3. Wider and Faster Impact

This was not a targeted intrusion; it was a systemic supply-chain style breach. Instead of one victim, hundreds were affected nearly simultaneously. Traditional controls lag in the face of AI-driven propagation that is automated, stealthy, and scalable.

Implications for AI Security

  1. Traditional Controls Are Not Enough
    OAuth-based, deeply networked AI integrations demand runtime guardrails that detect and govern behaviors in real time, not just at configuration time.
  2. Trust Boundaries Must Be Enforced
    Trusted integrations like Drift should be bounded by enforcement layers such as limited scopes, token lifecycle management, and anomaly detection.
  3. Incident Response Must Include Agent Context
    Investigations must trace through agent workflows, token issuance, and cross-system calls, requiring visibility into AI agent behavior.

How PointGuard AI Can Help

Agentic Security Through PointGuard AI:

  • AI and Agent Discovery: Automatically detect shadow AI agents and hidden integrations across your enterprise before adversaries can exploit them.
  • Red Teaming for LLMs and Agents: Simulate prompt injection, token abuse, and lateral-moving strategies to reveal weaknesses before attackers use them.
  • MCP Control: Enforce least-privilege token scopes, automatically rotate and revoke at-risk credentials, and limit agent reach across systems.
  • Runtime Guardrails: Monitor agent behavior in real time, detecting anomalous queries, data exfiltration patterns, and unauthorized cross-service activity.
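The discovery step in the list above amounts to diffing what is actually running against what was sanctioned. A minimal sketch, assuming a simple inventory of integration names mapped to their granted scopes (the function and data shapes are hypothetical):

```python
def find_shadow_integrations(
    inventory: dict[str, list[str]],
    approved: dict[str, list[str]],
) -> tuple[list[str], list[str]]:
    """Compare observed integrations against the sanctioned allowlist.

    Returns (shadow, over_scoped):
      shadow      - integrations seen in the environment but never approved
      over_scoped - approved integrations holding scopes beyond what was sanctioned
    """
    shadow = [name for name in inventory if name not in approved]
    over_scoped = [
        name
        for name, scopes in inventory.items()
        if name in approved and not set(scopes) <= set(approved[name])
    ]
    return shadow, over_scoped
```

Both output lists are actionable: shadow integrations are candidates for removal or review, while over-scoped ones are candidates for token re-issuance under least privilege.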

By governing how AI agents operate at discovery, test, token lifecycle, and runtime levels, PointGuard AI empowers enterprises to secure autonomous systems before they become breach vectors.

Conclusion

The Salesloft Drift breach underscores a fundamental shift: AI agents are now first-class security considerations. Autonomous actions, blended integrations, and tokenized permissions turn agents into potent bridges into your systems. Without visibility, governance, and real-time enforcement, one compromised integration can cascade into enterprise-wide exposure.

PointGuard AI closes that gap by securing AI agents across their lifecycle, preventing blind-spot risk, enabling true runtime governance, and making sure that innovation does not come at the cost of security.