Evaluating Legal Technology: How LDI Practitioners Cut Through the Chaos
Author: LDI Team
One of the goals of the Legal Data Intelligence model is to highlight the role that technologies can play in making legal workflows more efficient and less risky, enabling legal professionals to arrive at insights and advice with greater accuracy, confidence, and timeliness.
This raises important questions: At a time when legal teams are scrambling to adopt generative AI, how do legal professionals select the right technology for their use case? Indeed, what are the key considerations in the decision-making process? And how do early adopters ensure that their buying decisions don’t stem from fear of missing out, but are instead rooted in enhancing productivity and solving meaningful problems?
LDI founding member Adam Rouse, senior counsel, eDiscovery and director of legal operations at Walgreens, moderated a panel discussion at Legalweek titled “Cutting Through the Chaos: A Practical Framework for Evaluating Legal Tech,” which offered instructive takeaways and best practices for evaluating new technologies, selecting the right stakeholders, gathering feedback, and building consensus across teams.
The panel included founding member Ashley Christakis, senior manager of eDiscovery and legal operations at CrowdStrike; George Phillips, director of development and technology at Morgan Lewis; John Koss, head of innovation, AI, and e-data consulting at Mintz; and Major Baisden, CEO of Lineal.
Below are key takeaways from the discussion:
Doing the Work Upfront Goes a Long Way
When Rouse asked the panel how they ensure a technology project has optimal engagement, Christakis underscored the need for requirements gathering and documenting stakeholder buy-in at the outset of the project.
“People don’t spend enough time on requirements gathering. It sounds basic, but there are a lot of moving pieces. It’s really important to engage stakeholders and end users. You may need to refine those requirements as you move through the proof of concept, but once you’ve done that upfront work, you can ensure people stick to the plan and don’t go rogue because the requirements and success metrics are firmly in place,” she said.
A Healthy Aversion to Chasing the “Shiny New Thing”
Koss noted the abundance of legal technology solutions on the market trying to package themselves as generative AI offerings. “The delivery and messaging is all too similar. Everything seems to have a generative AI component or some generative AI feature that promises to empower users to redesign processes and workflows.”
In that din of generative AI promises, it’s useful to look at fundamentals. “Look at the baseline product and ask yourself if it’s something you already have. It’s possible some other product you are evaluating for a different use case might also encapsulate what you are looking for.”
Koss emphasized the need to engage a group of empowered end-users to concretize product requirements. “End users are a great source of information, but they have to be the right kind of end users. They need the credibility to bring others along. Either they are key opinion leaders or the people who use the products the most. It’s vital to get these people on board to evaluate the tool together and decide whether it’s the right choice.”
Phillips underscored the instrumental role played by superusers. “I like to find someone who can get deeply curious about the solution. Not everyone will be equally interested in fixing the problem you are trying to solve with the technology solution. But if you have an invested stakeholder who’s ideally someone close to the problem, it goes a long way.”
He brought up a recent example where a lawyer spent his weekends tinkering with a solution they were evaluating. “He learned how to use it so incredibly well that he helped us understand how to apply the same tool to other problems and use cases.”
Christakis spoke about the importance of clearly outlining the problem that needs to be solved. “From a corporate perspective, we always have to keep InfoSec, finance and procurement in the loop because we are often planning these initiatives a year in advance. You can’t budget for a shiny new tool. The line item needs to reflect the underlying problem the tool is meant to solve,” she said.
Proofs of Concept: Define the Goal So Your Service Provider Can Deliver
It’s essential to gain clarity on the problem a client is trying to solve at the outset, even before embarking on a proof of concept (POC) or pilot.
“Engagements where everything is agreed upon and everyone is on the same page go much more smoothly,” said Baisden, speaking about POCs from the perspective of a legal service provider.
“The point of technology is to make people more productive. If you’re buying a piece of technology, there is something you’re trying to improve—whether it’s making a process faster, increasing billable work or boosting output per hour or per dollar. Something needs to get better as a result of using that technology. So force us to align with that metric,” he said.
Agreeing on a quantifiable success metric, Baisden added, reduces subjectivity in determining whether the POC was successful.
Some projects may require helping clients define the problem at a more granular level, according to Phillips. “Clients know it takes 40 clicks to complete a task and want to reduce that to 15. That’s an easy metric to define. But often we need to help them identify where the hangups in their data are,” he said.
Defensibility and Governance: Technology Has Changed, but the Principles and Standards Remain the Same
Koss mentioned a recent case in the Southern District of New York (SDNY) where a defendant used Claude to conduct legal research after receiving a grand jury subpoena. The judge held that the communications between the defendant and the generative AI tool did not meet the criteria for attorney-client privilege.
“If you look at that case, none of what came out is shocking. The rules are essentially the same. Governance is the same. Defensibility is going to have to meet the same standard as before. We do have to think about what these new tools do in terms of their effects on the process. But at the end of the day, the principles and standards are the same.”
“Being at the intersection of data and the law over the last few years, one acquires a good sense of what good privacy, security, and defensibility look and feel like,” argued Koss. “So, there’s no need to reinvent the wheel for generative AI.”
FOMO: When It’s Legitimate and When It’s Not
Against the backdrop of an industry excited about the potential of generative AI, enthusiasm for adopting new technologies can range from reasonable optimism to overblown expectations. Distinguishing between legitimate and misplaced fear of missing out (FOMO) requires nuance.
“If you are a law firm today, it’s unacceptable that you’re not leveraging technologies like generative AI to drive down costs for your clients. If you don’t have a clear story about how you’ve used AI to reduce the cost burden for clients, it’s a bad look. In that situation, FOMO is legitimate,” said Baisden.
“It’s illegitimate when you’re not using it for a purpose—when you adopt it simply because another company is using it. Using AI without understanding why your competitors are using it is a recipe for disappointment,” he said.
The TLDR
Success still hinges on fundamentals: rigorous upfront requirements gathering, clear problem definition, and measurable outcomes. Amid a crowded market of AI-labeled tools, panelists cautioned against chasing “shiny” solutions without aligning them to real business needs and empowered end users. Well-structured pilots, grounded in quantifiable metrics, can cut through subjectivity and deliver value. The real challenge is not whether to adopt AI, but how to do so with discipline, purpose, and credible return on investment.
(Editor’s Note: Last year, LDI Architects from the Business of Law category published a comprehensive guide on onboarding new technologies. You can read it here.)