AppSOC is now PointGuard AI

"Living off AI" - the Latest Security Concern

New attack techniques demonstrated to infiltrate AI systems without malware

"Living off AI" - the Latest Security Concern

"Living off the land" is a long-established tactic in cybersecurity. Rather than deploying malware that triggers alerts, attackers exploit flaws in legitimate tools and code already present in an environment. Now, this concept is evolving. In AI systems, attackers are beginning to "live off AI" — leveraging built-in AI capabilities to conduct exploits without introducing new, easily detectable code.

How "Living off AI" Works

AI systems today are integrated into countless applications, from customer service chatbots to intelligent automation tools. These systems are often driven by large language models (LLMs) that interpret user input and take automated actions across connected applications.

Recent research by Cato Networks uncovered a troubling example. They analyzed Atlassian’s AI agent protocol (MCP), designed to automate IT support tickets. By injecting a malicious prompt into a support ticket, an attacker could cause the AI agent to carry out unintended actions—all through legitimate interfaces. This mirrors "living off the land" techniques: using allowed pathways and tools for malicious ends.

Such attacks aren’t limited to Atlassian’s ecosystem. Any AI-driven workflow that processes user-generated content is a potential target. If attackers can craft inputs that AI systems interpret incorrectly or dangerously, they can trigger unauthorized actions, data exfiltration, or service disruptions.

A New Generation of Threats

"Living off AI" introduces a new layer of complexity to enterprise security:

  1. AI agents as an attack surface: AI models are dynamic, context-driven, and capable of unpredictable responses. Traditional security tools designed for static applications may fail to detect malicious prompt injection or AI abuse.
  2. Supply chain risks: Many organizations adopt third-party AI capabilities through APIs or marketplaces. These integrations can introduce hidden vulnerabilities, as seen with the Atlassian MCP example.
  3. Data poisoning: AI models can be manipulated over time if attackers subtly alter training data or input streams, shaping AI behavior for future exploits—an issue recently highlighted in research from Florida International University.
  4. Invisibility: Because these attacks don’t rely on foreign executables or traditional malware, they evade many existing detection mechanisms.

Why AI Security Requires a Collaborative Approach

Experts warn that AI alone isn’t ready to autonomously defend itself. As one analyst noted in SC Media, a collaborative security strategy is essential. Security teams must adapt their defenses to include AI-aware monitoring, prompt hygiene, model hardening, and continuous testing against emerging attack techniques.

AI governance also plays a critical role. Organizations need visibility into which AI models are in use, what data they consume, and how they behave—an area where many enterprises still lack maturity.

How PointGuard AI Can Help

PointGuard AI offers comprehensive protection against prompt injection and "living off AI" attacks, as well as broader AI security risks. Our solutions include:

  • AI Discovery: Gain full visibility into the AI models and workflows operating across your environment, including shadow AI.
  • AI Hardening: Implement controls to sanitize inputs, enforce prompt hygiene, and minimize attack surfaces.
  • AI Red Teaming: Proactively test AI systems against known and emerging threat vectors, including prompt injection scenarios.
  • AI Detection & Response: Continuously monitor AI activity, detect anomalous behavior, and respond to threats in real time.
  • Data Protection: Safeguard sensitive information from AI-driven leaks or misuse.
  • Full-stack AI security: Secure the entire AI stack—from models to underlying applications—ensuring robust, enterprise-grade protection.

Additionally, PointGuard AI partners with Atlassian and provides an integrated solution available through the Atlassian Marketplace, enabling seamless AI security for Atlassian’s ecosystem.

As "living off AI" threats emerge, enterprises must act swiftly to protect their AI investments and digital ecosystems. PointGuard AI helps organizations stay ahead of these evolving risks—enabling secure, trusted AI adoption.