CPR’s Joint CPR Council & Arbitration Committee Meeting: AI in ADR

CPR Speaks

By Caterina Cesario and Michael Yan

On June 28, the CPR Arbitration Committee and the CPR Council held an in-person meeting at the New York office of Milbank LLP, marking the committee’s first in-person gathering since the onset of the Covid-19 pandemic. The focus of the meeting was a hot topic: artificial intelligence in alternative dispute resolution.

Led by moderator Viren Mascarenhas, a New York-based litigation and arbitration partner at Milbank, the session examined the intersection of AI, particularly ChatGPT, and the legal profession, with an emphasis on AI’s role in legal search engines and the potential opportunities and risks associated with these innovative tools.

The panel consisted of experts from diverse backgrounds, all deeply engaged in artificial intelligence, and covered a wide range of perspectives. Kicking off the conversation were Isabel Yishu Yang, the founder of ArbiLex, a Cambridge, Mass., AI-based litigation funding consulting firm, and Monica Crespo, Paris-based Head of Product at Jus Mundi, which provides online international law and arbitration research tools and services. They were soon joined by Shaun Sethna, general counsel and “Head of People” at The Suite Inc., which operates TechGC, a membership-only support organization of general counsels, and Jorge Mattamouros, a New York-based partner at White & Case.

Since its launch by research laboratory OpenAI on Nov. 30, 2022, ChatGPT has undeniably had a profound impact on people's lives. Isabel Yang highlighted the innovative aspects of this “generative” AI tool compared to other models. In particular, she drew a comparison to supervised models, which rely heavily on human input and, although they offer greater explainability and transparency in terms of their output, provide fewer automation benefits.

On the other hand, explained Yang, unsupervised models, after being fed a substantial amount of data, can learn and identify useful patterns for classification and prediction of answers but lack the ability to elucidate their decision-making process.

She then emphasized that ChatGPT, as a generative AI tool, represents a significant advancement beyond unsupervised models. While it can effectively use vast amounts of data unsupervised, what sets it apart is the ability for direct engagement and interaction through natural language. This allows users to provide feedback and input to ChatGPT, enhancing its capabilities and making it more adaptable and responsive to human needs.

During the discussion, Monica Crespo shared insights from her experience at Jus Mundi, where AI tools, particularly supervised models, have been employed continually since the early stages. She specifically highlighted the opportunities presented by AI in enhancing the quality and efficiency of legal research engines. In this context, she emphasized the transformative impact of Large Language Models, the technology underlying ChatGPT.

Crespo underscored that Large Language Models represent a game changer in the legal field due to their ability to comprehend the contextual nuances of words. This understanding enables them to recognize that the meaning of a word can vary depending on the specific context in which it is used. For instance, they can discern the distinction between the legal interpretation of a word in a civil document versus a criminal document. This contextual awareness and semantic comprehension contribute to AI’s revolutionary potential in legal research, allowing for more accurate and relevant results.

As the discussion progressed, Shaun Sethna elaborated on three key benefits that technologies like ChatGPT can bring to general counsels. First, such tools enable the swift digestion and comprehension of information, thereby supporting general counsels in making timely and well-informed decisions. Second, these technologies facilitate the effective translation of information into formats tailored to the specific characteristics of the intended audience, streamlining the creation of presentations such as PowerPoint slides. Third, they are highly valuable in supporting the day-to-day tasks of general counsels.

Jorge Mattamouros highlighted three foundational elements that ChatGPT and similar tools have introduced into the legal landscape. First, he explained that these technologies have made it increasingly difficult, if not impossible, to distinguish between legal work products generated by machines and those produced by humans.

Second, while lawyers typically specialize in specific jurisdictions and areas of expertise, ChatGPT can rapidly acquire and provide legal knowledge that is agnostic to jurisdiction and field of practice. This ability surpasses human capacity for knowledge acquisition and dissemination.

Finally, ChatGPT’s introduction is expected to attract new participants to the legal marketplace. This development evokes both excitement and concern, as the impact on the future of lawyering and the market dynamics remains uncertain. One positive aspect, however, is the potential for these tools to enhance access to justice by making legal intelligence and expertise, which are often costly, available to a wider range of individuals and organizations.

Monica Crespo identified two main types of risk that AI’s application could bring to the legal profession: social risk and organizational risk. AI feeds on large amounts of data that inevitably include unwanted social biases and prejudices, which can lead the AI to produce biased predictive results.

From an organizational perspective, the interaction between AI and humans creates a new form of competition within the legal profession. Understanding AI’s limits and the nature of the data fed to it is extremely challenging, but also crucial to avoid overreliance or improper reliance on AI.

ArbiLex’s Isabel Yang, on the other hand, noted that the risks do not lie in the AI itself, but in the way humans use it. Referencing the recent case of Mata v. Avianca Inc., No. 22-cv-01461 (S.D.N.Y.), which made headlines when a lawyer relying on ChatGPT for his brief cited nonexistent AI-generated caselaw, Yang pointed out that users should never rely on AI both for a solution and for the evaluation of that solution. The evaluation of AI-generated output should always remain in the hands of the human user, Yang suggested.

On the same topic, Jorge Mattamouros identified two additional types of risk he predicts attorneys will confront. First is the leak of confidential information: lawyers feeding confidential client information into the server of a third-party AI provider inevitably risk disclosing it.

Mattamouros viewed this as an existential threat to legal services that needs to be addressed.

The second risk relates to the value of legal services. The value that large law firms offer their clients traditionally lies in the quality of human work. How to incorporate AI into the workflow while ensuring the quality of that work is a challenge, and neither training AI in-house nor outsourcing it to a third party seems like an ideal solution.

When asked what types of risk arbitrators face when relying on AI to digest and summarize large quantities of documents and information presented by the parties, Mattamouros reiterated the risk of leaking confidential information. Loading the parties’ confidential case information into a third-party AI server threatens the private nature of arbitration, he noted. He added that risks often stem from the human user’s lack of understanding of the AI mechanism and resulting inability to evaluate the quality and efficacy of its output.

During a Q&A after the individual presentations, the panelists were asked to share their predictions on how AI would change the legal field.

Isabel Yang said that the repetitive and mundane work of junior lawyers likely will be replaced by AI. This will have an impact, hard to predict, on the current hourly business model of legal services. But she said she believed that the core value of attorneys in understanding and applying the law likely won’t ever be replaced by AI.

Monica Crespo predicted that lawyers’ work will become more quantitative, rather than qualitative, with the increasing application of AI.

Shaun Sethna saw the assignment of responsibility as a potential challenge. As with autonomous driving, where the liability of the driver versus the liability of the car company is under debate, when lawyers become users of AI, allocating responsibility between lawyers and AI providers poses a potential dilemma.

Finally, Jorge Mattamouros predicted AI models will serve two functions in the legal profession’s future, with some specializing as databases, while others function as reasoning tools. Either way, he said he believed AI models will eventually branch into industry-based specialization.

* * *

The authors are 2023 CPR Institute summer interns. Cesario graduated last month with an LLM from Yeshiva University’s Benjamin N. Cardozo School of Law in New York. Yan is a student at Boston College Law School in Newton Centre, Mass. 
