Insights from Legalweek: How Legal Teams Are Taking Up the Mantle of AI Governance

Author: LDI Team

March 30, 2026

LDI had a standout presence at Legalweek 2026. Indeed, it marked the first time in the event’s history that Legal Data Intelligence had its own dedicated track.

As part of the new track, LDI founding member Bobby Malhotra, a partner at Winston & Strawn, moderated a discussion titled “Leveraging Legal Data Intelligence to Streamline AI Governance: Practical Strategies for Legal Teams.” Panelists Daniel Lim, assistant general counsel at CrowdStrike; Deepa Kairen, associate general counsel at GEICO; Jerry Bui, senior vice president of digital forensics at Purpose Legal; and LDI Architect Odette Claridge, corporate counsel for privacy and governance at ProSearch, shared perspectives on the emerging practice of AI governance, the role legal professionals can play in advancing it, and strategies to streamline governance.

Here are some key takeaways from the session:

Why AI governance is becoming a critical focus for organizations

Malhotra, a lead partner in his firm’s AI practice, opened the discussion by noting how AI governance has quickly become a widespread priority with the rise of large language models and generative AI tools.

“Today, AI governance professionals are some of the busiest people in the industry.”

Kairen highlighted factors driving corporate investment in AI governance.

“AI is really scaling at a level that’s unprecedented. You have low-code, vibe coding, and agentic AI putting development in the hands of people who are not software developers or data scientists. We need streamlined processes to evaluate AI systems to keep up with that pace of change. That may involve ensuring you have the right inventory and risk assessment upfront to determine what’s low-, medium-, and high-risk.”

Claridge, who chairs ProSearch’s AI governance committee, drew a connection between the growing need for governance, emerging regulations, and expanding capabilities as foundation models mature.

“AI tools are continuing to expand their offerings. You need to continually conduct risk assessments and develop a deeper understanding of how these tools are used across environments and conditions. For example, an organization may operate in multiple regulated areas, and those considerations need to be part of the calculus.”

Beyond semantics: AI governance vs. AI data governance

Though the terms may sound similar and are often used interchangeably in the legal industry, they represent different concepts, according to Bui.

“They are related, but I think of them as two different layers in the governance model,” he said.

Bui described AI governance as the process of approving and overseeing AI systems, while AI data governance focuses on the data flowing through those systems.

“At this layer, you ask questions such as: Where did the data originate? How reliable is it? This is your internal organizational data, not the data used to train the models.”

He added that AI data governance also involves accountability in how systems are used.

“I’m in digital forensics, so I think about how a user might use the system in unexpected ways to commit fraud or misconduct. That’s extreme behavior you may not have accounted for when designing the system. But if you’re not thinking about those extremes, you’re not stress-testing the system enough.”

Malhotra offered a useful schema for understanding the relationship between the two.

“AI governance is a higher-level concept. It’s the vision for AI in your organization and how you enforce it. AI data governance is a subset of that strategy.”

Why organizations look to legal teams for AI governance

“AI governance is a framework of policies, processes, and controls that help companies identify and mitigate AI risks,” Kairen said. “Generative AI is evolving rapidly. It learns from new data and takes a probabilistic approach to generating content. So, it’s not enough to conduct periodic reviews — you need governance throughout the entire lifecycle.”

Kairen said in-house legal teams are well positioned to lead these efforts because they regularly work across functions to identify and manage risk.

“We are in a good position to help teams understand AI risks and build frameworks to manage them,” she said.

Lim added that legal teams should be at the forefront of AI adoption.

“In the era of large language models, lawyers are all about words. If anyone should be at the tip of the spear when it comes to incorporating AI workflows effectively, it should be legal.”

The importance of clear language and definitions in AI governance

In response to a question about guardrails for external AI systems, Malhotra underscored the ambiguity embedded in vendor marketing terminology. “Terms like ‘secure’ or ‘closed’ models are often used without a shared or precise definition,” he noted.

“Effective risk management requires looking past the labels to understand where data actually flows, how it is handled within the system’s underlying architecture, and what security workflows are truly in place—then reinforcing those protections contractually to preserve confidentiality and ownership rights.”

He added that organizations must carefully define key concepts in agreements.

“There are many factors to consider — from indemnification and IP rights to defining your data so it includes derivative data.”

Lim echoed the need for clarity.

“It’s important to define in plain terms what you’re talking about. What’s concerning is that sometimes neither party fully understands what a clause means.”

He noted that while AI introduces new risks, the legal approach is familiar.

“For lawyers, this is similar to what we’ve always done. AI poses higher risks, but it’s not entirely different from e-discovery, data privacy, and information governance. It comes down to defining terms clearly so you can effectively manage risk.”
