The "USB-C port for AI" has arrived, but it turns out the port might be a bit loose.
Last week, the Model Context Protocol (MCP) the shiny new standard for connecting AI agents to your data and tools, hit a major speed bump. Vulnerabilities in reference implementations from Anthropic and Microsoft have exposed a uncomfortable truth: we are currently building the "Agentic Era" on a shaky security foundation.
From remote code execution (RCE) in Git servers to massive SSRF risks that expose cloud credentials, the message is clear: Prompt security isn't enough. We need to talk about the "Iceberg Problem."
The Iceberg Problem: Beyond the Prompt
Most AI security today focuses on the prompt layer, trying to stop the model from saying something "naughty." But as BlueRock Security researchers recently pointed out, the real risk is the execution layer.
When an agent uses an MCP server to fetch a file or run a command, it’s acting as a "privileged deputy." If that deputy doesn't have strict boundaries, a simple prompt injection can turn into a full-scale system compromise.
The Recent Hits:
- Anthropic Git MCP (CVE-2025-68143, -145, -144): Researchers found that the git_init tool could be tricked into creating repositories in sensitive directories like ~/.ssh. Combined with other flaws, an attacker could exfiltrate your private keys just by sending a clever message to your AI assistant.
- Microsoft MarkItDown MCP: This tool, designed to convert files to Markdown, was found to have no URI restrictions. Attackers could use it for Server-Side Request Forgery (SSRF), pointing the agent at AWS metadata endpoints to steal cloud credentials. A scan of 7,000+ MCP servers found that 36.7% shared this exact flaw.
The 7-Step Defense-in-Depth for MCP Security
To move from "cool demo" to "enterprise-ready," we need a layered defense. Here is the 7-step blueprint for securing Agentic AI and MCP environments:
1. Discovery
You can’t secure what you can't see. "Shadow MCP" is the new Shadow IT. Organizations need to continuously scan their environments to inventory every MCP server, agent process, and third-party connector in use.
2. Authentication: Verifying Every Connection
Trusting an agent simply because it’s internal is a recipe for disaster. PointGuard enforces rigorous identity verification to ensure that only legitimate entities can interact with your MCP servers. We provide native support for the industry-standard protocols required to secure the agentic supply chain:
- Basic Auth: For quick, secure setup in controlled environments.
- JWT (JSON Web Tokens): For stateless, scalable, and secure identity propagation across distributed agent architectures.
- OAuth 2.0: For robust, enterprise-grade delegated authorization, allowing agents to act on behalf of users without ever exposing primary credentials.
3. Authorization: Granular Access Control
Authentication tells you who is calling; Authorization defines what they are allowed to touch. PointGuard acts as the central policy engine, enabling precise access control across the entire execution chain. We don't just secure the perimeter; we secure the interactions:
- Agent to MCP: Ensure only specific "Finance Agents" can talk to "Banking MCPs."
- MCP to Tool: Restrict an MCP server so it can only trigger approved tools (e.g., allowing read_file while strictly blocking delete_repo).
- Environment Scoping: Limit tool execution to specific environments or data silos, preventing an exploit in a dev tool from jumping to production data.
4. Adaptive Guardrails
Traditional static filters fail against the "infinite" ways an agent might be manipulated. Adaptive guardrails use secondary LLMs or specialized "Model Armor" to inspect intent in real-time—blocking prompt injections and “indirect or stored” prompt injections before they reach the tool-calling phase.
5. AI-DLP (Data Loss Prevention)
What happens when an agent successfully reads a file but it contains your customer's PII? AI-DLP scans the output of MCP tool calls, redacting or masking sensitive data before it ever hits the chat history or the model's context window.
6. Observability-based Threat Detection
We need to monitor behavior, not just logs. If an MCP server suddenly starts making outbound requests to an internal metadata IP (like 169.254.169.254), observability tools should flag this as an SSRF attempt and kill the session instantly.
7. Open Source MCP Risk KB
With over 13,000 MCP servers now on GitHub, the community needs a centralized Risk Knowledge Base. This should include dynamic risk scoring, known CVEs, and community-reported "weak" implementations to help developers choose safe building blocks.
Secure Your Agentic Future with PointGuard AI
Building this 7-step stack from scratch is a massive undertaking. That’s where the PointGuard AI Security Platform comes in.
The "Iceberg Problem" of AI security isn't going away, but it can be managed. The PointGuard AI Security Platform provides these critical capabilities—from OAuth 2.0 authentication and granular tool authorization to adaptive guardrails and observability —as a unified solution for Agentic AI, MCPs, and LLM-based systems.
By acting as a secure gateway, PointGuard ensures that as your AI agents become more capable, they don't also become more dangerous.
Whether you're building custom agents or deploying third-party MCP servers, PointGuard ensures your AI stays productive—not predatory.





