Generative AI is changing the legal landscape. The benefits of leveraging AI in legal work seem promising, and the market is growing. More clients want lawyers to use AI tools, and, importantly, more lawyers are becoming open to using them. A recent survey found that 82% of lawyers think AI can be used for legal work, with 51% thinking AI should be used. For many, using AI is becoming less of an option and more of an inevitability. Daniel Tobey, chair of DLA Piper’s AI practice, has said, “This is an arms race, and you don’t want to be the last law firm with these tools.” However, some see this “arms race” in a more negative light, pointing to a growing list of problems associated with AI adoption, including the highly publicized issue of AI hallucinations. As firms of all sizes evaluate the benefits of leveraging AI, they must also consider the safeguards needed to mitigate those risks.
One common application of AI that firms of all sizes could benefit from is the automation of burnout-inducing “low-level” work, which many lawyers feel they are drowning in. It was recently reported that 67% of in-house lawyers feel “buried in low-value work.” In addition, lawyers at 33% of large firms (more than 1,000 employees) spend nearly one out of every three hours on these “low-value” tasks. Being buried in low-level tasks isn’t unique to the legal field, nor is using AI to automate such tasks. AI-enabled productivity tools are widespread, cheap, and easy to adopt, and as a result they can be an attractive option for firms of all sizes.
AI and Firm Size
Many small and medium-sized law firms feel uniquely pressured to incorporate AI tools to level the playing field with larger firms “that can just throw people at the problem.” Nevertheless, the flexibility of smaller firms is an advantage: “Where larger firms may wait on committees or consultants to approve an AI approach, smaller firms can move quickly to take advantage of the capabilities.” This flexibility may enable small firms to take advantage of a variety of tools with distinct strengths. For example, small firms could use AI to create complex custom training programs for new associates, or to handle the demands of large cases that may have been unmanageable before. AI can also help lawyers at firms with limited resources think outside the box by testing their arguments against different perspectives.
On the other hand, large law firms are also hoping to gain from leveraging AI, and they face external pressures of their own to adopt it, including from clients. As Zach Warren, who leads Technology and Innovation Content for the Thomson Reuters Institute, explains, “Small companies may appreciate you using generative AI—But large companies expect it.” Large firms may have the resources to handle barriers like cost, data protection, and security, allowing them to deploy more powerful AI programs across more areas, including their non-legal departments. These large programs may help standardize work product across hundreds of employees, a task that is otherwise difficult to manage. Overall, large firms have more processes that AI programs could improve or automate, and they have the resources to create and manage those programs.
While these benefits are exciting, any firm leveraging AI must address potential downsides, like AI hallucinations. A hallucination is an AI output that sounds plausible but is entirely made up. Hallucinations are a byproduct of the way generative AI works: Large Language Models (“LLMs”), like ChatGPT, craft sentences by predicting the most probable next word. In the legal context, this can lead to made-up cases and opinions. Given the importance of accuracy and precision in the legal profession, hallucinations pose a significant problem for any firm wanting to leverage AI. They are unlikely to go away any time soon, with experts disagreeing about whether the problem can ever be fully solved. In the meantime, legal professionals need to focus on mitigating the risks that hallucinations pose.
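The next-word prediction that drives this behavior can be illustrated with a toy sketch. The vocabulary and probabilities below are invented for illustration; a real LLM learns distributions like these over an enormous vocabulary, which is why its output reads fluently even when the underlying facts are fabricated:

```python
# Toy next-word model: hand-made probabilities, purely illustrative.
# A real LLM learns such distributions from training data.
next_word_probs = {
    "the": {"court": 0.6, "plaintiff": 0.4},
    "court": {"held": 0.7, "ruled": 0.3},
    "held": {"that": 1.0},
}

def generate(word, steps=3):
    """Extend a sentence by repeatedly picking the most probable next word."""
    words = [word]
    for _ in range(steps):
        choices = next_word_probs.get(words[-1])
        if not choices:
            break  # no learned continuation; stop generating
        words.append(max(choices, key=choices.get))
    return " ".join(words)

print(generate("the"))  # "the court held that"
```

The model chooses each word only because it is statistically likely to follow the previous one; nothing checks whether the resulting sentence is true, which is exactly how a fluent but fictitious case citation can emerge.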
The first precaution firms may take is choosing the right AI tool in the first place. Many steps to avoid hallucinations can be taken when the AI model is being created. Using a narrowly tailored model trained on high-quality data can limit hallucinations. For example, one challenge of using ChatGPT for legal work is that it likely wasn’t trained on all the cases in a paywalled legal database like Westlaw. This explains why ChatGPT can intelligently discuss a major Supreme Court case like Brown v. Board of Education, for which there is abundant literature on the open internet to scrape, but may hallucinate aspects of lesser-known cases. A narrowly tailored tool could also prove more effective. Similar to how LLMs can be designed to filter out hate speech, an AI tool explicitly tailored for legal work could include a filter that flags any cited case not found in a human-verified database of real cases, making hallucinations easier to spot.
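A minimal sketch of the kind of citation filter described above might look like the following. The verified-citation set, the example citations, and the `flag_unverified` helper are all hypothetical; a production tool would check against a complete, maintained database and handle many more citation formats:

```python
import re

# Hypothetical human-verified set of real reporter citations (illustrative only).
VERIFIED_CITATIONS = {
    "347 U.S. 483",  # Brown v. Board of Education
}

# Match U.S. Reports citations of the form "347 U.S. 483".
CITATION_RE = re.compile(r"\b\d{1,3}\s+U\.S\.\s+\d{1,4}\b")

def flag_unverified(text):
    """Return citations in `text` that are absent from the verified set."""
    found = CITATION_RE.findall(text)
    # Normalize internal whitespace before comparing against the set.
    found = [" ".join(c.split()) for c in found]
    return [c for c in found if c not in VERIFIED_CITATIONS]

draft = ("See Brown v. Board of Education, 347 U.S. 483 (1954); "
         "but cf. Smith v. Jones, 999 U.S. 123 (2020).")
print(flag_unverified(draft))  # ['999 U.S. 123']
```

The key design point is that the filter never decides a case is real on its own: it only surfaces citations a human has not previously verified, leaving the final judgment to the lawyer.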
While many are excited about the potential benefits of a large, custom AI program, law firms are currently opting instead for commercially developed programs. Options like Casetext, which several prominent law firms have already adopted, are showing promising results. Other AI tools are being built into pre-existing programs from trusted technology providers: Westlaw Precision has already incorporated AI, and LexisNexis has launched Lexis+ AI. These options are designed around the unique needs of legal work, possibly limiting the risks associated with using public, open-source tools.
Another key precaution against hallucinations is human oversight, implemented through methods like firm guidelines and user training. According to IBM, human oversight will always be the final backstop against issues like hallucinations, allowing humans to apply their subject-matter expertise to the nuances of each case.
Firms of all sizes can benefit from leveraging AI to automate burnout-inducing low-level tasks. However, when firms use AI for more complex legal tasks, they must carefully decide which tools to use and how human oversight will be implemented to mitigate risks like hallucinations. Small firms have the advantage of being able to adopt new tools more quickly and flexibly. Large firms will bear higher financial and oversight costs but, with their greater resources, will be able to maintain powerful, custom-tailored AI programs that smaller firms could not support. Use of generative AI will likely look different from firm to firm, and will evolve as the technology continues to develop.