
100,000 Prompts Target Gemini in Cloning Attempt

Key Takeaways

  • Coordinated campaign issued over 100,000 prompts
  • Attack targeted proprietary reasoning logic
  • No confirmed user data exposure
  • Highlights rise of AI model extraction threats

Large-Scale Model Extraction Attempt Against Gemini

In February 2026, Google disclosed that attackers had submitted more than 100,000 crafted prompts to its Gemini AI model in an effort to extract proprietary reasoning logic. The campaign was detected and blocked, with no confirmed data loss. The incident highlights the growing risk of AI model cloning attempts.

What We Know

On February 12, 2026, reports surfaced that Google had identified and disrupted a coordinated attempt to extract internal reasoning patterns from its Gemini AI model. According to LatestLY (https://www.latestly.com/technology/google-flags-massive-ai-cloning-attempt-as-over-100000-malicious-prompts-target-gemini-logic-7311552.html), attackers issued more than 100,000 structured prompts designed to probe and reconstruct model behavior.

Additional coverage from Android Authority (https://www.androidauthority.com/google-gemini-clone-attempts-3640480/) indicates the activity was commercially motivated and focused on distillation rather than exploiting a software flaw. The attack relied on large-scale prompt aggregation rather than unauthorized access to backend systems.

Google confirmed that no user data breach occurred. Accounts associated with the activity were disabled, and safeguards were enhanced to limit similar high-volume probing in the future.

This represents one of the largest publicly reported AI model extraction campaigns against a frontier generative model.

What Could Happen

This incident reflects a model extraction attempt, not a traditional breach.

Attackers leveraged legitimate access to systematically query Gemini at scale. By sending extremely high volumes of carefully structured prompts, adversaries attempted to approximate internal reasoning logic through statistical aggregation of responses. The objective was to distill model behavior into a lower-cost replica.

Unlike prompt injection or data exfiltration, no vulnerability or API misconfiguration was disclosed. Instead, the risk stemmed from the open and interactive nature of large language models.

AI systems expose probabilistic reasoning patterns through output responses. When queried at scale, these outputs can be harvested to reconstruct behavioral patterns. High-volume access, if not sufficiently monitored, creates a data collection pipeline suitable for distillation training.
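
To make that pipeline concrete, here is a minimal sketch in Python of how harvested prompt/response pairs become an ordinary supervised fine-tuning dataset for a smaller "student" model. The query_model stand-in and the JSONL record format are assumptions made for illustration; nothing here describes the actual tooling used in this campaign.

```python
import json


def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a hosted LLM API client (placeholder only)."""
    return "<model response would be returned here>"


def harvest_pairs(prompts: list[str], out_path: str) -> None:
    """Collect (prompt, response) pairs, the raw material of distillation.

    At low volume this is indistinguishable from normal usage; the risk
    described above only emerges when it runs across tens of thousands of
    systematically structured prompts.
    """
    with open(out_path, "w", encoding="utf-8") as f:
        for prompt in prompts:
            response = query_model(prompt)
            # Each record becomes one supervised fine-tuning example
            # for a smaller "student" model.
            f.write(json.dumps({"prompt": prompt, "completion": response}) + "\n")
```

The point of the sketch is that no exploit is involved: the "pipeline" is ordinary API calls written to disk, which is why defenses have to key on usage patterns rather than on any single malicious request.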

Google’s detection systems flagged the anomalous prompt activity and blocked the associated accounts before any confirmed compromise of intellectual property occurred.
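
Google has not disclosed how its detection works, but the simplest form of the volumetric screening implied here is a statistical outlier check on per-account prompt volume. The sketch below is illustrative only; the field names and the z-score threshold are assumptions, not a description of Google's systems.

```python
from statistics import mean, stdev


def flag_high_volume_accounts(daily_counts: dict[str, int],
                              z_threshold: float = 4.0) -> list[str]:
    """Flag accounts whose daily prompt volume is an extreme statistical outlier.

    daily_counts maps account_id -> prompts submitted in the last 24 hours.
    A production system would use longer baselines, per-tier expectations,
    and many more signals; this is the simplest possible form of the idea.
    """
    counts = list(daily_counts.values())
    if len(counts) < 2:
        return []
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []
    return [acct for acct, n in daily_counts.items() if (n - mu) / sigma > z_threshold]
```

Volume alone produces false positives for legitimate heavy users, which is why the behavioral signals discussed below matter as well.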

Why It Matters

Although no customer data exposure was reported, the business implications are significant.

Frontier models represent substantial investment in research, infrastructure, and proprietary training techniques. Model extraction threatens competitive differentiation and may enable adversaries to reduce development costs by distilling from production systems.

From a governance perspective, this event aligns with concerns outlined in the NIST AI Risk Management Framework (https://www.nist.gov/itl/ai-risk-management-framework) regarding model integrity and protection of high-value AI assets.

For enterprises deploying generative AI, this incident reinforces the importance of monitoring AI-specific abuse patterns. Traditional cybersecurity tools may not detect large-scale distillation attempts that operate within legitimate API boundaries.

As AI systems become foundational infrastructure, intellectual property protection becomes a core security priority alongside privacy and compliance.

PointGuard AI Perspective

Model extraction is an AI-native threat that requires AI-native defenses.

PointGuard AI provides continuous model risk monitoring that detects abnormal prompt entropy, volumetric anomalies, and cross-domain probing patterns indicative of distillation campaigns. Instead of relying solely on rate limiting, PointGuard AI analyzes behavioral intent across AI interactions.
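
As a rough illustration of one such signal, the sketch below computes a pooled token-entropy score over an account's recent prompts; a sustained spike in both volume and this kind of diversity is one heuristic indicator of systematic probing. This is a conceptual example, not PointGuard AI's actual detection logic.

```python
import math
from collections import Counter


def shannon_entropy(tokens: list[str]) -> float:
    """Shannon entropy (in bits) of a token distribution."""
    total = len(tokens)
    if total == 0:
        return 0.0
    counts = Counter(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


def prompt_diversity(recent_prompts: list[str]) -> float:
    """Entropy of the pooled token distribution across an account's recent prompts.

    Systematic probing campaigns tend to sweep many topics and phrasings,
    which pushes pooled entropy well above a typical user's baseline.
    """
    pooled = [tok for p in recent_prompts for tok in p.lower().split()]
    return shannon_entropy(pooled)
```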

Our AI SBOM visibility capabilities help organizations understand where third-party or hosted models are integrated, enabling assessment of exposure to extraction risk across environments.

Policy enforcement features allow security teams to define thresholds for anomalous model usage and automatically initiate containment workflows when extraction-like behavior is detected.
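
In generic terms, threshold-based enforcement of this kind can look like the sketch below. The thresholds, field names, and action labels are invented for the example and do not represent PointGuard AI's configuration schema.

```python
from dataclasses import dataclass


@dataclass
class ExtractionPolicy:
    """Illustrative thresholds that, breached together, suggest extraction-like use."""
    max_prompts_per_day: int = 5_000
    max_diversity_bits: float = 12.0  # pooled-entropy score, as in the earlier sketch


def containment_action(policy: ExtractionPolicy,
                       prompts_today: int,
                       diversity_bits: float) -> str:
    """Map observed usage against policy thresholds to a containment decision."""
    if prompts_today > policy.max_prompts_per_day and diversity_bits > policy.max_diversity_bits:
        return "suspend_key_and_open_incident"   # hard containment: revoke access, alert the SOC
    if prompts_today > policy.max_prompts_per_day:
        return "throttle_and_queue_for_review"   # soft containment: rate-limit pending review
    return "allow"
```

Pairing a soft action (throttle and review) with a hard action (suspend and alert) helps avoid cutting off legitimate heavy users while still containing extraction-like behavior quickly.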

PointGuard AI also supports governance alignment by mapping AI system controls to recognized frameworks and risk management standards.

As organizations accelerate AI adoption, proactive monitoring for cloning and distillation attempts will be essential to protecting innovation, intellectual property, and long-term trust in AI systems.

Incident Scorecard Details

Total AISSI Score: 7.4/10

  • Criticality = 8 (AISSI weighting: 25%): Proprietary frontier model reasoning logic targeted
  • Propagation = 7 (AISSI weighting: 20%): Extraction technique reusable across API-accessible AI systems
  • Exploitability = 7 (AISSI weighting: 15%): Active high-volume campaign confirmed
  • Supply Chain = 6 (AISSI weighting: 15%): Hosted AI service model exposed via public API
  • Business Impact = 5 (AISSI weighting: 25%): No confirmed exploitation or customer harm at time of reporting

Sources

LatestLY
https://www.latestly.com/technology/google-flags-massive-ai-cloning-attempt-as-over-100000-malicious-prompts-target-gemini-logic-7311552.html

Android Authority
https://www.androidauthority.com/google-gemini-clone-attempts-3640480/

NIST AI Risk Management Framework
https://www.nist.gov/itl/ai-risk-management-framework

AI Security Severity Index (AISSI)


Scoring Methodology

  • Criticality (weight 25%): Importance and sensitivity of the affected assets and data.
  • Propagation (weight 20%): How easily the issue can escalate or spread to other resources.
  • Exploitability (weight 15%): Whether the threat is actively being exploited or only demonstrated in a lab.
  • Supply Chain (weight 15%): Whether the threat originated with, or was amplified by, third-party vendors.
  • Business Impact (weight 25%): Operational, financial, and reputational consequences.
