AppSOC is now PointGuard AI

OmniGPT Breach Exposes Millions of AI Conversations

Key Takeaways

  • OmniGPT—a popular AI chatbot aggregator—reported a breach with large data exposure
  • Sensitive user emails, phone numbers, session details, and chat logs were leaked
  • API keys, credentials, and links to user-uploaded files were included
  • Highlights data security challenges in AI-powered platforms

OmniGPT Data Breach: What Happened

In early February 2025, reports surfaced that OmniGPT, an AI aggregator that lets users interact with multiple LLMs such as ChatGPT-4, Claude 3.5, Gemini, and others, suffered a significant breach. A threat actor using the alias “Gloomer” posted samples of stolen data on a hacking forum, claiming to have leaked information linked to approximately 30,000 users and over 34 million lines of chat messages. (SecureWorld)

According to these reports, the leaked dataset included personal contact information such as email addresses and phone numbers, extensive conversation logs between users and the AI models, and API and authentication request details, potentially including API keys and credentials. (cpomagazine.com)

How the Breach Happened

The exact method of compromise has not been confirmed, and OmniGPT has not publicly acknowledged the incident. However, security reporting indicates that the stolen data was offered for sale on a notorious illicit forum, suggesting either that a backend vulnerability was exploited or that internal systems were inadequately secured. (CSO Online)

Cybersecurity observers noted that the leaked information included links to user-uploaded files, indicating that not just metadata but actual content stored on OmniGPT’s infrastructure might have been exposed.

Why It Matters

If verified, this breach would represent one of the largest data exposures involving conversation logs from an AI platform to date, with multiple serious implications:

  • Privacy & PII Exposure: Leaked email addresses and phone numbers create phishing and identity theft opportunities. (Hackread)
  • Credential Risk: API keys, session tokens, and authentication data visible in transcripts could allow credential misuse or unauthorized access to other services. (Daily Security Review)
  • Sensitive Conversation Exposure: Millions of chat messages may contain personal, business, or confidential content, posing privacy and compliance risks.
  • Regulatory & Legal Implications: Massive data leaks can trigger penalties under data protection laws like GDPR if user data was stored without sufficient safeguards.

The incident underscores that AI platforms with deep access to personal and enterprise data must adopt robust security controls and transparent breach reporting practices.

PointGuard AI Perspective

From the PointGuard AI standpoint, the OmniGPT breach highlights the critical need for comprehensive security controls around AI platforms that store, process, and transmit personal or sensitive data.

Traditional application security practices—such as strong authentication, encryption at rest and in transit, secure API handling, logging, anomaly detection, and rigorous access controls—must be extended and tailored to AI ecosystems where user interactions themselves generate vast volumes of sensitive content.
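One concrete way to extend such controls to AI ecosystems is to scan chat transcripts for embedded credentials before they are persisted, since leaked transcripts in this incident reportedly contained API keys. A minimal sketch follows; the patterns and function names are illustrative assumptions, not taken from any PointGuard AI or OmniGPT implementation, and a real deployment would use a maintained secret-scanning ruleset.

```python
import re

# Illustrative patterns for a few common credential formats (assumed for
# this sketch; production systems use far larger, maintained rulesets).
SECRET_PATTERNS = {
    "openai_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "bearer_token": re.compile(r"(?i)bearer\s+[A-Za-z0-9._\-]{20,}"),
}

def redact_secrets(message: str) -> tuple[str, list[str]]:
    """Replace likely credentials with placeholders before storage.

    Returns the redacted text and the names of the patterns that matched.
    """
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        if pattern.search(message):
            hits.append(name)
            message = pattern.sub(f"[REDACTED:{name}]", message)
    return message, hits

redacted, found = redact_secrets(
    "Use key sk-abcdefghijklmnopqrstuvwxyz123456 for the API"
)
```

Redacting at ingestion means that even if stored conversation logs are later exfiltrated, embedded credentials are no longer usable.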

PointGuard AI emphasizes continuous monitoring, automated incident detection, and rigorous data governance controls to ensure that AI systems do not become inadvertent repositories of sensitive data without appropriate safeguards.

This event reinforces that securing AI systems is not optional—especially for platforms aggregating multiple models and user contexts—and should be treated with the same seriousness as any other data-centric application in an enterprise.

Incident Scorecard Details

Total AISSI Score: 7.4/10

Criticality = 8.0
Large potential exposure of PII and sensitive chat content.

Propagation = 7.0
Exposed data includes credentials that could be reused elsewhere.

Exploitability = 7.5
Data appears to have been accessed by a threat actor and put up for sale.

Supply Chain = 5.0
Unclear whether third-party components were involved.

Business Impact = 8.5
Exposure of private conversations and credentials undermines trust and could have legal consequences.


Scoring Methodology

Criticality (25%): Importance and sensitivity of the affected assets and data.

Propagation (20%): How easily the issue can escalate or spread to other resources.

Exploitability (15%): Whether the threat is actively being exploited or only demonstrated in a lab.

Supply Chain (15%): Whether the threat originated with, or was amplified by, third-party vendors.

Business Impact (25%): Operational, financial, and reputational consequences.
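Assuming the total AISSI score is a straight weighted average of the five sub-scores under these weights (an assumption; the published methodology may round or combine them differently), the arithmetic can be checked directly:

```python
# Sub-scores and weights taken from the incident scorecard above.
scores = {
    "criticality": (8.0, 0.25),
    "propagation": (7.0, 0.20),
    "exploitability": (7.5, 0.15),
    "supply_chain": (5.0, 0.15),
    "business_impact": (8.5, 0.25),
}

# 8.0*0.25 + 7.0*0.20 + 7.5*0.15 + 5.0*0.15 + 8.5*0.25
total = sum(score * weight for score, weight in scores.values())
```

Under this assumption the sub-scores combine to 7.4 out of 10; a different rounding or weighting convention would yield a slightly different total.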
