As organizations race to adopt generative AI, the security of AI systems—not just the use of AI for security—is emerging as a pressing priority. At this week’s Gartner Security & Risk Management Summit, three influential voices laid out converging perspectives on where the industry stands in securing AI applications, models, and agentic systems: Dennis Xu (analyst at Gartner), Hammad Rajjoub (director at Microsoft), and Anton Chuvakin (senior security advisor for Google Cloud).
While each speaker brought a unique lens, common themes surfaced: data loss, prompt injection, hallucination, agent risk, and the foundational need for identity, governance, and trust in AI systems. This blog organizes their insights by topic, emphasizing the shared consensus and nuance.
From AI for Security to Security of AI
A crucial distinction ran through the discussions: AI as a tool for cybersecurity (e.g., using AI agents in SOCs), versus securing the AI systems themselves. The latter includes safeguarding models, training data, agent actions, and enterprise workflows—this is the domain where new risks emerge and where PointGuard AI focuses.
As Rajjoub observed, “AI agents will become the augmentation of our workforces… [but] what does security look like for those agents?” His message was clear: while AI can be used to improve defense, AI itself introduces new attack surfaces that must be secured independently.
The Speed Factor: Dramatically Faster Adoption than Cloud, or SaaS
One of the most striking warnings came from Anton Chuvakin, who emphasized that AI’s pace of adoption is unlike anything we’ve seen before.
Drawing a comparison to past tech disruptions, he noted: “The speed of adoption is not what it was for cloud, for SaaS… This is all going to be done faster. If you ban [AI], you get bypassed. If you move slowly in the right direction, you get bypassed anyway”.
Chuvakin connected today’s AI governance dilemmas to familiar pain points from the era of Shadow IT: “It may be consumer-grade tools… or business AI purchased without central control… but it still ruins governance.” He referred to this as Shadow AI type two, where sanctioned use still becomes risky due to a lack of visibility, speed, and oversight.
His conclusion: the only viable approach is streamlined governance and guided evolution. This means steering users to safer tools rather than blocking them—and doing so at a pace that matches AI's real-time, multi-modal nature. “People obsessing over prompt injection sometimes get hit by SQL injection. Security teams can’t afford to treat AI risks as isolated. They're part of a larger, rapidly shifting attack surface”.
Prompt Injection: The Persistent Risk
All three experts agreed that prompt injection remains one of the most intractable threats to LLM-based systems and agent workflows.
Dennis Xu was blunt: “It is impossible to block prompt injection 100% of the time… We need to change our mindset”. He cited techniques like guardrails and input/output filtering but emphasized their limitations against more sophisticated strategies such as gradient-based prompt generation.
Anton Chuvakin echoed this view, calling prompt injection “the problem that shows up only all of the time.” He dismissed the hope of a one-click solution: “There is no magic firewall that fixes prompt injection”.
Data Exposure and Loss
Another focal point was data leakage—through memory persistence, fine-tuning pipelines, or misconfigured retrieval systems.
Xu highlighted a now-infamous case where users saw other users’ conversation history in a chat app. “This is a memory management problem… not just specific to OpenAI. It could happen anywhere”. He also warned about insecure file uploads in custom copilots or GPTs, citing a prompt injection attack conducted entirely in German to bypass English-only filters.
Rajjoub emphasized intent enforcement to prevent data misuse: “If an agent is summarizing an email with a hidden prompt in white-on-white text… it breaks the agent’s intent”. Chuvakin added that even enterprise AI tools often suffer from shadow AI deployments and insufficient visibility: “The speed of adoption is so fast that governance can’t keep up”.
Hallucination and Toxic Output
AI-generated content that is factually incorrect (hallucinations) or culturally inappropriate (toxic output) poses reputational and operational risks. Xu categorized hallucination as “a feature, not a bug” due to the inherent nature of predictive language modeling.
Chuvakin, meanwhile, warned against treating these risks as purely academic. “Sometimes we are stuck with them because nobody else will take them… they land in the CISO’s inbox”.
Detection and mitigation strategies mentioned include RAG (retrieval-augmented generation), user validation, temperature tuning, and output moderation. But as Xu pointed out, “the more mitigation strategies there are, the more it suggests none of them fully work”.
Agent Autonomy and Execution Risk
All three speakers raised the alarm about autonomous agent risk, where AI systems take actions independently across systems or domains.
Xu advised attendees to focus on what the agent can do, not just what it sees: “Only take low-risk actions… because once the agent can take action, it can cause real damage”. He described vendor behaviors like OpenAI blocking stock trading and requiring human confirmation for shopping as necessary examples of risk gating.
Rajjoub emphasized the need for subagents to monitor agents, asking “Who guards the guards?” He pushed for every agent to be “governable, verifiable, auditable” and uniquely identified. Chuvakin added that cross-domain agents (e.g., personal assistants also controlling enterprise workflows) pose a particular concern: “I don’t want my online research agent opening my garage door”.
Governance: Fast, Dynamic, and Embedded
Governance was not treated as a compliance afterthought but as a real-time necessity. Chuvakin framed the challenge as “governance at the speed of AI”, warning that conventional models are already obsolete. “You can’t rely on static policy review boards anymore. You’ll be overrun by users deploying AI before the next meeting even starts”.
Xu and Rajjoub emphasized policy enforcement at the agent level, including actions, intent, and inter-agent communication. Rajjoub described a multi-layered framework of trust, access control, and secure-by-default principles.
Here's the revised conclusion section for your blog post, reflecting the accurate capabilities of PointGuard AI:
PointGuard AI: Securing What AI Builds
The experts’ insights align strongly with PointGuard AI’s philosophy: security must be native to AI applications—not bolted on. While others build AI for cybersecurity, we secure the AI itself.
PointGuard AI addresses these risks through:
- Comprehensive AI Discovery: We provide deep visibility into AI assets across your enterprise, including models, pipelines, and increasingly, agents. This includes tracking usage, ownership, and risk posture, even in environments where shadow AI is prevalent.
- Prompt Injection Mitigation: We use dynamic red teaming, contextual input shaping, and structured testing to detect and help neutralize prompt injection risks before they impact production systems.
- Potential Data Leak Detection: While we don’t enforce data access boundaries, our platform continuously monitors AI model behavior for signs of sensitive information leakage, including inadvertent exposure during RAG operations or fine-tuned model responses.
- AI Security Posture Management (AI-SPM): Our platform gives teams a unified control plane to assess and manage AI-specific risks—spanning governance gaps, insecure development pipelines, model exposure, and misconfigured access policies.
- Integration with MLOps Platforms: We integrate with platforms like Databricks to provide continuous risk assessment and policy enforcement throughout the model lifecycle—from training to deployment to retirement.
- AI Automated Red Teaming: We simulate real-world attack scenarios—like prompt injection, data leakage, and adversarial manipulation—against your deployed AI systems to test resilience and surface vulnerabilities.
- AI Runtime Detection and Response: Our platform monitors production AI behavior in real time, correlating signals from LLM outputs, agent actions, and user context to detect threats and initiate automated responses.
As enterprises embrace AI agents and autonomous systems, PointGuard AI empowers security teams to manage these risks with the same rigor and speed that the technology demands.