Graphiti’s Memory Graph Becomes an Agent Attack Path
Key Takeaways
- Graphiti before version 0.28.2 contained a Cypher injection flaw in search-filter construction.
- In MCP deployments, prompt injection could steer an LLM client into calling the vulnerable function with attacker-controlled values.
- Successful exploitation could enable unauthorized query execution, data access, tampering, or deletion.
- The issue highlights how agent memory layers and orchestration components can expand AI attack surface.
Graphiti flaw exposes agent memory to injection risk
Graphiti, a framework for building and querying temporal context graphs for AI agents, disclosed CVE-2026-32247 after researchers found unsafe construction of Cypher search filters in versions before 0.28.2. The issue matters because the same flaw could be reached in MCP-style agent deployments, either through direct untrusted access or through prompt injection that induces an LLM client to call the vulnerable function. See the GitHub Advisory Database entry, the NVD record, and PointGuard AI’s broader AI Security Incident Tracker. (GitHub)
What We Know
Graphiti describes itself as a framework for building and querying temporal context graphs for AI agents, which means it sits close to memory, retrieval, and agent orchestration workflows. According to the GitHub advisory and the NVD entry, versions before 0.28.2 contained a Cypher injection vulnerability in shared search-filter construction for non-Kuzu backends. The advisory was published on March 11, 2026 and updated on March 12, and the NVD entry was published within the same reporting window, making March 12, 2026 the most reliable date for when the issue became broadly visible to the security community. (GitHub)
The technical issue centered on attacker-controlled label values supplied through SearchFilters.node_labels, which were concatenated directly into Cypher label expressions without validation. The advisory further notes that the issue was not limited to direct use of Graphiti alone. In MCP deployments, an attacker could potentially exploit the flaw through prompt injection against an LLM client if that client could be induced to call search_nodes with attacker-controlled entity_types values. The fixed version is 0.28.2. This puts the incident squarely in the category of agent and MCP-adjacent infrastructure risk, where unsafe lower-layer query handling can become reachable through higher-level model interactions. For PointGuard context, this is exactly the sort of agentic exposure discussed in PointGuard AI’s agentic AI security overview and its analysis of the MCP security crisis.
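The core weakness can be illustrated with a minimal sketch. The field name `SearchFilters.node_labels` comes from the advisory; the function names and the validation approach below are hypothetical, not Graphiti's actual code. Because Cypher cannot parameterize label names the way it parameterizes values, the usual hardening is to validate labels against a strict identifier pattern (or an allowlist) before interpolation:

```python
import re

def build_label_expr_unsafe(node_labels):
    # Vulnerable pattern (illustrative): label values are concatenated
    # directly into the Cypher label expression, so a value like
    # "Entity) RETURN n //" breaks out of the intended expression.
    return "n:" + "|".join(node_labels)

# Cypher labels cannot be passed as query parameters, so string
# interpolation is unavoidable -- which makes validation mandatory.
LABEL_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def build_label_expr_safe(node_labels):
    # Hardened variant: only identifier-shaped labels are allowed
    # through; anything containing Cypher metacharacters is rejected.
    for label in node_labels:
        if not LABEL_RE.match(label):
            raise ValueError(f"invalid node label: {label!r}")
    return "n:" + "|".join(node_labels)
```

In the unsafe variant, a crafted `node_labels` entry lands verbatim inside the query text; in the safe variant the same input raises before any query is built.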
What Could Happen
This issue is best understood as a query injection flaw in an AI memory and orchestration component. The direct technical failure was classic unsanitized input handling. Graphiti joined attacker-controlled label values and inserted them into Cypher expressions without proper validation or parameterization. In traditional application security terms, that is an injection weakness. In AI security terms, the risk becomes more serious because the vulnerable path can sit behind an agent or model workflow rather than a normal user-facing form.
That AI-specific reachability is what makes the incident notable. The GitHub advisory explicitly says the flaw could be exploited in MCP deployments not only by direct untrusted access to a Graphiti MCP server, but also through prompt injection against an LLM client. In other words, an attacker may not need direct database access if they can influence the model’s tool use and get it to invoke the vulnerable function with malicious values. If successful, the outcome could include arbitrary Cypher execution, unauthorized reads, tampering, deletion, and potential bypass of logical isolation around graph data. This is a strong example of how agent autonomy, tool invocation, and context-layer complexity can turn a familiar software weakness into a distinctly AI-shaped attack path. PointGuard AI’s AI detection and runtime guardrails approach is designed around these interaction-layer risks, where model prompts, tools, and downstream systems all need active policy enforcement.
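The prompt-injection reachability can be sketched as a guarded tool-call layer. The names `search_nodes` and `entity_types` follow the advisory; the registry, guard function, and return value are illustrative assumptions, not the actual MCP server implementation. The point is that an interaction-layer policy can reject attacker-influenced arguments even when the model itself has been steered into making the call:

```python
# Hypothetical sketch: entity types an operator has pre-registered.
REGISTERED_ENTITY_TYPES = {"Person", "Organization", "Event"}

def search_nodes(query, entity_types):
    # Stand-in for the real MCP tool; returns the label expression
    # it would embed in a Cypher query, for demonstration only.
    return "n:" + "|".join(entity_types)

def guarded_tool_call(tool, args):
    # Interaction-layer guard: reject entity_types values that are not
    # pre-registered, regardless of what the LLM client was induced to
    # request by injected prompt content.
    if tool == "search_nodes":
        bad = [t for t in args.get("entity_types", []) if t not in REGISTERED_ENTITY_TYPES]
        if bad:
            raise PermissionError(f"unregistered entity types: {bad}")
    return search_nodes(args["query"], args.get("entity_types", []))
```

A guard like this does not replace the upstream fix in 0.28.2, but it shows why policy enforcement between the model and its tools limits blast radius when a lower-layer flaw exists.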
Why It Matters
CVE-2026-32247 matters because it affects a component that can influence how AI agents remember, retrieve, and reason over context. When a flaw appears in that layer, the risk is not limited to a single bad output. It can impact the integrity and confidentiality of the data the agent uses to make decisions. If a graph-backed memory store can be queried or modified through injection, an attacker may be able to extract sensitive information, corrupt context, or shape future model behavior by altering the data the agent relies on.
The incident also reflects a broader shift in AI risk. Security teams are no longer only protecting models and prompts. They are now defending orchestration layers, retrieval components, memory systems, MCP services, and the glue code that connects them. The Graphiti issue shows how a conventional software vulnerability can become materially more dangerous when paired with agent workflows and prompt injection opportunities. That matters for governance as well as security. Organizations trying to align with frameworks such as NIST AI RMF increasingly need controls that account for third-party agent frameworks and their runtime behavior, not just model selection or data provenance. PointGuard AI has been emphasizing this pattern in its AI Security Incident Tracker and in its coverage of MCP and agent security risks.
PointGuard AI Perspective
Graphiti’s vulnerability illustrates why AI security cannot stop at the model boundary. Modern agentic systems depend on a chain of components that includes memory layers, retrieval pipelines, MCP servers, orchestration logic, external tools, and shared data stores. A flaw in any one of those layers can become reachable through prompt injection, unsafe tool invocation, or over-permissive runtime behavior. That is exactly why PointGuard AI approaches AI security as an end-to-end control problem rather than a narrow model-scanning exercise. PointGuard AI’s agentic AI security platform is built to give teams visibility into agents, MCP services, tools, and connected data paths, so they can understand what is exposed before an attacker does. (pointguardai.com)
For incidents like this one, PointGuard AI helps in several ways. Discovery and inventorying can identify agent frameworks, graph-backed memory services, and MCP-connected components that may otherwise operate outside normal security review. Runtime guardrails can inspect tool calls and interaction flows for prompt injection patterns, unsafe parameter use, and policy violations before they reach sensitive downstream systems. Policy enforcement can reduce blast radius by limiting what agents are allowed to query, modify, or execute across memory stores and external tools. PointGuard AI also helps organizations apply governance consistently across AI applications and agentic workflows, which is increasingly important as third-party frameworks introduce hidden supply chain risk. As AI systems become more autonomous, trustworthy adoption will depend on proactive controls that assume failure paths will emerge somewhere in the chain and stop them before they turn into real incidents. For more PointGuard context, see the AI security governance overview and the company’s recent discussion of the MCP security crisis. (pointguardai.com)
Incident Scorecard Details
Total AISSI Score: 6.7/10
Criticality = 7
The affected component sits near agent memory and context retrieval, which can expose sensitive internal data and influence agent behavior. AISSI weighting: 25%.
Propagation = 8
The vulnerability can extend beyond a single local instance because it may be reached through MCP-style deployments and LLM tool invocation paths, creating realistic cross-component spread in agentic workflows. AISSI weighting: 20%.
Exploitability = 4
Public disclosure and a documented exploit path exist, but no evidence of confirmed in-the-wild exploitation was found at the time of reporting. AISSI weighting: 15%.
Supply Chain = 8
This risk originates in a third-party framework used to support AI agents and graph-backed memory, which can be deeply embedded in downstream deployments with limited direct visibility. AISSI weighting: 15%.
Business Impact = 6
No confirmed exploitation was reported in the sources reviewed, but the flaw presents credible risk of unauthorized access, tampering, or deletion in production agent workflows. AISSI weighting: 25%.
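For transparency, the composite score follows directly from the per-dimension scores and weights above (which sum to 100%):

```python
# Weighted AISSI composite from the scorecard dimensions above.
scores = {
    "criticality":     (7, 0.25),
    "propagation":     (8, 0.20),
    "exploitability":  (4, 0.15),
    "supply_chain":    (8, 0.15),
    "business_impact": (6, 0.25),
}

total = sum(score * weight for score, weight in scores.values())
print(f"{total:.2f}")  # 6.65, reported above as 6.7/10
```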
Sources
GitHub Advisory Database: CVE-2026-32247
https://github.com/advisories/GHSA-gg5m-55jj-8m5g
(GitHub)
NIST National Vulnerability Database: CVE-2026-32247
https://nvd.nist.gov/vuln/detail/CVE-2026-32247
(NVD)
Graphiti Security Advisory
https://github.com/getzep/graphiti/security/advisories/GHSA-gg5m-55jj-8m5g
(GitHub)
