Hardcoded Android Keys Opened Gemini to Anyone on the Wire

Key Takeaways

  • CloudSEK identified 32 Google API keys across 22 Android apps totaling over 500 million installs, including OYO, Google Pay for Business, Taobao, and ELSA Speak.
  • Enabling Gemini on the owning project silently extended AI access to every API key under it, converting low-sensitivity identifiers into live AI credentials.
  • Reported billing spikes include $15,400 for a solo developer, $82,314 in 48 hours for a Mexican team, and roughly $128,000 at a Japanese company.
  • In at least one case involving ELSA Speak, researchers accessed user-uploaded audio files via the Gemini Files API.
  • The root cause is architectural; developers followed Google’s guidance, but platform defaults invalidated the assumed security model.

Summary

CloudSEK research published in April 2026 found 32 Google API keys hardcoded across 22 Android applications with over 500 million combined installs. Once Gemini AI was enabled on the owning project, every key could authenticate to Gemini endpoints. Attackers ran unauthorized inference at victim expense and reached user data, with reported billing spikes reaching around $128,000 at one company.

What We Know

CloudSEK researchers disclosed in April 2026 that 32 Google API keys had been hardcoded into 22 widely distributed Android applications, with a combined install base above 500 million. The affected apps include OYO, Google Pay for Business, Taobao, and ELSA Speak, as reported by SecurityWeek. The keys were originally intended for narrow Google services and were included per Google’s own integration guidance.

The environment shifted when Gemini access was enabled on the owning projects. Turning Gemini on extended access to every key under the project, silently elevating the capability envelope of identifiers previously tied to limited services. Coverage by Infosecurity Magazine noted the shift turned static in-APK identifiers into live AI credentials. Victim developers discovered the exposure through anomalous billing and, in at least one case, evidence that user-uploaded audio had been accessed by outside parties.

What Happened

The incident is a platform-architecture failure rather than a credential-hygiene one. Android developers followed Google's prescribed pattern, shipping keys in their APKs with standard package signing and SHA-1 certificate restrictions. The intended threat model assumed each key carried only limited authority over the narrow service it was provisioned for. When Gemini was enabled on a project, the platform automatically extended AI scope to every key under it without changing any key metadata.

Attackers extracted keys with routine APK inspection and called Gemini inference and file endpoints directly. Package-signature restrictions were meaningless because Gemini endpoints accepted the keys from anywhere, and firewall-level IP restrictions failed in at least one case, allowing roughly $128,000 of unauthorized inference, as TechRadar reported. A formerly low-sensitivity token became an interactive channel to a frontier AI model that bills by the request.
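The extraction-and-reuse path can be sketched in a few lines. The sample resource string, the fabricated key, and the helper names below are illustrative, not taken from the incident; the AIza-prefixed key shape and the generativelanguage.googleapis.com endpoint are Google's documented formats.

```python
import re

# Google API keys have a well-known shape: "AIza" followed by 35
# URL-safe characters. Grepping decoded APK text (strings.xml, smali
# output) for this pattern is the routine inspection step described above.
GOOGLE_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_google_keys(text: str) -> list[str]:
    """Return any substrings shaped like Google API keys."""
    return GOOGLE_KEY_RE.findall(text)

# Hypothetical decoded resource content; the key below is fabricated.
sample = '<string name="api_key">AIzaSyD-EXAMPLEEXAMPLEEXAMPLEEXAMPLEEXA</string>'
keys = find_google_keys(sample)

# Once Gemini is enabled on the owning project, the same key
# authenticates to the Gemini REST endpoint via a plain query
# parameter; no package signature is checked at this surface.
def gemini_url(key: str, model: str = "gemini-pro") -> str:
    return (
        "https://generativelanguage.googleapis.com/v1beta/"
        f"models/{model}:generateContent?key={key}"
    )
```

The point of the sketch is that nothing in the request carries the Android package identity the SHA-1 restriction was supposed to enforce: a bare key string is sufficient.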

Why It Matters

When a cloud provider silently extends a legacy identifier’s capability envelope to include frontier AI inference, every downstream system that holds that identifier inherits uncontracted exposure. For enterprise developers, AI-capable keys now require AI-level handling: isolation from shipping artifacts, short lifetimes, and ongoing scope review.
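The scope-review part of that handling can be expressed with Google Cloud's own key-restriction controls. A minimal sketch, assuming the gcloud CLI is authenticated; PROJECT_ID, KEY_ID, and the target service are placeholders:

```shell
# List the project's API keys and inspect their current restrictions.
gcloud services api-keys list --project=PROJECT_ID

# Pin a shipped key to the one service it was provisioned for, so that
# enabling Gemini on the project does not extend the key's reach.
gcloud services api-keys update KEY_ID \
  --api-target=service=firebaseremoteconfig.googleapis.com
```

An API-target restriction is a deny-by-default allowlist at the key level, which is the property the package-signature restriction failed to provide.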

A related MCP credential-exposure incident on the PointGuard tracker shows how quickly a weakly guarded AI surface is converted into broader exposure. The exposure of user audio through an AI file API will concentrate regulatory attention on the privacy boundary between AI services and the data they can reach. Business impact is already measurable: small developers have reported startup-ending spikes, and larger teams have paid tens to hundreds of thousands of dollars in unauthorized inference.

PointGuard AI Perspective

This exposure fits a pattern PointGuard AI is built to intercept: AI capability creeping into environments that were not designed for it through platform changes outside any one developer’s control. PointGuard AI’s AI security posture management capability gives enterprises a running inventory of active AI services, the credentials that can reach each service, and the data boundaries each credential crosses.

When a provider enables new AI capability on an existing identifier, PointGuard flags the change and surfaces downstream exposure to security, platform, and developer teams in one view. PointGuard's AI governance layer extends that visibility into policy, so teams can define which credentials may carry AI capability, what rate and data-access boundaries apply, and what review is required before a credential crosses a new AI threshold. Trustworthy AI adoption depends on keeping the identity layer aligned with the capability layer, continuously rather than as a one-time audit.

Incident Scorecard Details

Total AISSI Score: 6.4/10

  • Criticality: 6 (weight 25%). Personal data exposure and cost exposure to victim organizations.

  • Propagation: 7 (weight 20%). 22 apps across 500 million installs; methodology reusable against any mobile AI integration.

  • Exploitability: 7 (weight 15%). Confirmed abuse with documented financial harm across multiple organizations.

  • Supply Chain: 6 (weight 15%). Embedded-credential pattern is platform-agnostic; exposure spans the Google ecosystem.

  • Business Impact: 6 (weight 25%). Confirmed billing losses; at least one startup reported being destroyed; sustained press coverage.


Scoring Methodology

  • Criticality (weight 25%): Importance and sensitivity of the affected assets and data.

  • Propagation (weight 20%): How easily the issue can escalate or spread to other resources.

  • Exploitability (weight 15%): Whether the threat is actively exploited or only demonstrated in a lab.

  • Supply Chain (weight 15%): Whether the threat originated with, or was amplified by, third-party vendors.

  • Business Impact (weight 25%): Operational, financial, and reputational consequences.
