Securing the Future of AI: Databricks and PointGuard AI Lead the Way with DASF

Preview of the Databricks Data + AI Summit and an interview with Arun Pamulapati

With the Databricks Data + AI Summit (DAIS) right around the corner, we wanted to highlight our partnership with Databricks and revisit an in-depth interview we recorded with Arun Pamulapati, Senior Security Field Engineer at Databricks. Arun has played a central role in developing and rolling out the Databricks AI Security Framework (DASF) and will be speaking about it at the Summit. As a featured Databricks partner, PointGuard AI has embraced DASF and embedded its security controls directly into our platform.

If you’re attending DAIS, don’t miss the sessions featuring Arun and our own CPO, Mali Gorantla.

Here is a summary of the full interview. You can also view excerpts here.

The New AI Security Landscape

Enterprises are no longer experimenting with AI—they’re operationalizing it. “AI has gone from a science project to a core part of the enterprise software stack,” said Pamulapati. Unlike previous technology shifts like cloud or SaaS adoption, the move to AI introduces unfamiliar risks and actors. Traditional security teams now find themselves engaging with data scientists, ML engineers, and governance officers—roles that weren’t previously on their radar.

Pamulapati explains that while security teams are used to adapting—having already managed transitions like on-prem to cloud—they now face new questions: What constitutes a model? Who owns it? How do we evaluate its behavior? How do we govern its use across departments?

Why DASF Matters

To help security and AI teams answer these questions, Databricks introduced the Databricks AI Security Framework (DASF). It’s a comprehensive map that outlines all the components of an AI system—not just the model itself—and applies a robust set of controls to each.

“We’ve spoken to over 200 CISOs to understand their needs,” Pamulapati shared. “We found a real gap—not just in tools, but in understanding. Security teams needed help identifying what risks exist across AI pipelines and how to mitigate them using practical controls.”

DASF helps bridge this gap. It builds on existing standards like OWASP Top 10 for LLMs, MITRE ATLAS, HITRUST, and the EU AI Act, aligning AI-specific risks with the kinds of controls security professionals are already familiar with. DASF isn't limited to Databricks customers—it’s designed to be an open framework for any organization adopting AI, regardless of their stack.

Model Sprawl, Shadow AI, and the Supply Chain Challenge

One of the biggest challenges enterprises face is model sprawl. As teams download pre-trained models from sources like Hugging Face, they often skip essential vetting steps. These models might contain vulnerabilities, backdoors, or malware—posing a direct threat to enterprise data.

Databricks’ solution includes tools like Unity Catalog and MLflow, which offer structured governance for AI assets. “If customers use our recommended architecture, they can track which models are in use, who owns them, and what data they're accessing,” said Pamulapati. “But shadow AI is real. There’s a lot of experimentation happening outside formal channels.”

That’s where frameworks like DASF and partners like PointGuard AI come in.

PointGuard AI: Operationalizing DASF for the Enterprise

PointGuard AI is among the first security platforms to fully integrate DASF, enabling customers to automate model scanning, policy enforcement, and continuous compliance. “We’re proud to work with Databricks to turn their vision into real-world security operations,” said Leichter.

PointGuard’s platform discovers hidden models, scans them for supply chain risks, monitors runtime behavior, and enforces data access controls—before, during, and after deployment. These capabilities align tightly with DASF’s control recommendations, helping customers stay secure without slowing innovation.

“Security isn’t a one-time scan,” Pamulapati emphasized. “It’s an ongoing process. New attack vectors are discovered all the time. That’s why we need partners like PointGuard, who bring automation and deep security expertise into the AI lifecycle.”

ML SecOps: The Next Evolution of DevSecOps

As AI becomes embedded in enterprise apps, a new discipline is emerging: ML SecOps. It extends DevSecOps practices to the unique challenges of AI, including model ingestion, data labeling, training pipelines, and model serving.

Databricks advocates for a security-first approach to every stage of the ML lifecycle. This includes scanning third-party models before use, setting access controls via Unity Catalog, enforcing authentication on model endpoints, and monitoring inference logs for anomalies or data leakage.

“There are a lot of parallels to the early days of SaaS and cloud,” Pamulapati noted. “We learned from those journeys, and now we’re applying that knowledge to AI. ML SecOps is how we’ll make this transition smoother and safer.”

Readiness for Regulation

Global AI regulations are coming fast—from the EU AI Act to industry-specific mandates. DASF 2.0 is being expanded to map directly to many of these emerging standards, helping customers prepare in advance.

“Well-written regulations are principle-based, not technology-specific,” said Leichter. “DASF gives companies a head start on compliance, by aligning their practices with frameworks that regulators already trust.”

Getting Started

Databricks has made the DASF white paper publicly available, and a more detailed compendium is coming soon. Organizations can start by understanding the risks across their AI systems and aligning internal stakeholders—including data scientists, legal teams, and business leaders—around a shared security posture.

As Arun Pamulapati summed it up: “AI is an enterprise capability now. And with the right frameworks, tools, and partnerships, we can secure it like one.”