A Primer on Using Artificial Intelligence in the Legal Profession
By Lauri Donahue
Lauri Donahue is a 1986 graduate of Harvard Law School and was one of the co-founders of the Harvard Journal of Law & Technology. She is now the Director of Legal Content for LawGeex, a Tel Aviv legaltech startup.
What's artificial intelligence ("AI") and why should lawyers care about it? On a practical level, lawyers should be aware that software powered by AI already carries out legal tasks. Within a few years, AI will be taking over (or at least affecting) a significant amount of work now done by lawyers. Thirty-nine percent of in-house counsel expect that AI will be commonplace in legal work within ten years.
On a more philosophical level, lawyers should understand that the "decisions" made by AI-powered software will raise significant legal questions, including those of tort liability and of criminal guilt. For example, if AI is controlling a driverless car and someone's killed in an accident, who's at fault?
While the philosophical questions are important to resolve, this Comment will focus on the practical issues. To provide an overview of what AI is and how it will be used in the legal profession, this Comment addresses several questions:
- What is AI?
- How does AI work?
- What can AI do?
- How are lawyers using AI?
- How will AI affect the legal profession?
What is AI?
Let's start with a few definitions:
"Artificial Intelligence" is the term used to describe how computers can perform tasks normally viewed as requiring human intelligence, such as recognizing speech and objects, making decisions based on data, and translating languages. AI mimics certain operations of the human mind.
"Machine learning" is an application of AI in which computers use algorithms (rules) embodied in software to learn from data and adapt with experience.
A "neural network" is a computational model, loosely inspired by the structure of the brain, that classifies information -- putting things into "buckets" based on their characteristics. The hot-dog identifying app from HBO's Silicon Valley is an example of one application of this technology.
How Does AI Work?
Some AI programs train themselves, through trial and error. For example, using a technique called neuroevolution, researchers at Elon Musk's OpenAI research center set up an algorithm with policies for getting high scores on Atari videogames. Several hundred copies of these rules were created on different computers, with random variations. The computers then "played" the games to learn which policies were most effective and fed those results back into the system. AI can also be used to build better AI. Google is building algorithms that analyze other algorithms, to learn which methods are more successful.
Other AI programs need to be trained by humans feeding them data. The AI then derives patterns and rules from that data. AI programs trained through machine learning are well-suited to solve classification problems: calculating the probability that a given piece of information is of type A or of type B. For example, determining whether a given animal is a panda or a koala is a classification problem.
The training starts with showing the computer lots of samples of pandas and koalas. These initial samples are called the training set, and each is labeled to tell the AI which type of animal it is being shown.
The AI builds a model--a set of rules--to distinguish between pandas and koalas. That model might be based on things like size, coloring, the shape of the ears, and what the animal eats (bamboo or eucalyptus).
After training, the AI can be tested with new pandas and koalas to see whether it classifies them correctly. If it doesn't do very well, the algorithm may need to be tweaked or the training set may need to be expanded to give the AI more data to crunch.
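The train-then-test loop described above can be sketched in a few lines of code. This is a toy illustration, not a real machine-learning pipeline: the features, sample values, and the simple threshold "model" are all invented for the example.

```python
# Training set: labeled samples the system learns from.
# Feature values are illustrative, not zoological data.
training_set = [
    {"weight_kg": 100, "diet": "bamboo",     "label": "panda"},
    {"weight_kg": 90,  "diet": "bamboo",     "label": "panda"},
    {"weight_kg": 8,   "diet": "eucalyptus", "label": "koala"},
    {"weight_kg": 12,  "diet": "eucalyptus", "label": "koala"},
]

def train(samples):
    """Build a simple model: a weight threshold halfway between
    the average panda weight and the average koala weight."""
    pandas = [s["weight_kg"] for s in samples if s["label"] == "panda"]
    koalas = [s["weight_kg"] for s in samples if s["label"] == "koala"]
    threshold = (sum(pandas) / len(pandas) + sum(koalas) / len(koalas)) / 2
    return {"weight_threshold": threshold}

def classify(model, sample):
    """Apply the learned rules to a new, unlabeled sample."""
    if sample["diet"] == "bamboo" or sample["weight_kg"] > model["weight_threshold"]:
        return "panda"
    return "koala"

model = train(training_set)

# Test phase: new samples the model has never seen before.
test_set = [
    ({"weight_kg": 95, "diet": "bamboo"}, "panda"),
    ({"weight_kg": 10, "diet": "eucalyptus"}, "koala"),
]
accuracy = sum(classify(model, s) == label for s, label in test_set) / len(test_set)
print(f"Test accuracy: {accuracy:.0%}")
```

If the test accuracy were poor, the fix would be exactly what the text describes: adjust the rules in `train` or add more labeled samples to `training_set`.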
What Can AI Do?
At this point in its development, AI is good at finding items that meet human-defined criteria and detecting patterns in data. In other words, AI can figure out what makes a panda a panda and what distinguishes it from a koala--which lets it find the pandas in a collection of assorted animals. These are sometimes called "search-and-find type" tasks.
Once it's identified something, the AI can then apply human-defined rules and take actions. In the case of legal work, an AI can carry out tasks like:
- IF this document is a non-disclosure agreement, THEN send it to the legal department for review
- IF this NDA meets the following criteria, THEN approve it for signature
- FIND all my contracts with automatic renewal clauses and NOTIFY ME four weeks before they renew
- TELL ME which patents in this portfolio will expire in the next six months
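Once the classification step is done, rules of this kind are straightforward to express in code. In the minimal sketch below, the document fields, dates, and rule logic are all illustrative assumptions, not any particular product's implementation:

```python
from datetime import date, timedelta

def route_document(doc, today=date(2018, 1, 1)):
    """Apply simple human-defined IF/THEN rules to a document
    that has already been classified by type."""
    actions = []
    # IF this document is an NDA, THEN send it to legal for review.
    if doc["type"] == "NDA":
        actions.append("send to legal department for review")
    # FIND contracts with auto-renewal clauses renewing within four weeks.
    if doc["type"] == "contract" and doc.get("auto_renewal"):
        if doc["renewal_date"] - today <= timedelta(weeks=4):
            actions.append("notify: renews within four weeks")
    # TELL ME which patents expire in the next six months.
    if doc["type"] == "patent":
        if doc["expiration_date"] - today <= timedelta(days=182):
            actions.append("notify: expires within six months")
    return actions

print(route_document({"type": "NDA"}))
# → ['send to legal department for review']
print(route_document({
    "type": "contract",
    "auto_renewal": True,
    "renewal_date": date(2018, 1, 20),
}))
# → ['notify: renews within four weeks']
```

The hard part is not the rules themselves but the classification and extraction step that feeds them--deciding that a document is an NDA, or finding the renewal date in its text.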
According to Stefanie Yuen Thio, joint managing partner and head of corporate at TSMP Law Corp. in Singapore, legal work that's repetitive, requires minimal professional intervention, or is based on a template will become the sole province of software. In addition, she says,
any legal work that depends on collating and analyzing historical data such as past judicial decisions, including legal opinions or evaluating likely litigation outcomes, will become the dominion of AI. No human lawyer stands a chance against the formidable processing power of a mainframe when it comes to sifting through voluminous data.
AI can help consumers by providing a form of "legal service" to clients who might otherwise not be able to afford a lawyer. The free service DoNotPay, created by a 19-year-old, is an AI-powered chatbot that lets users contest parking tickets in London and New York. In its first 21 months, it took on 250,000 cases and won 160,000 of them, saving users more than $4 million worth of fines. The same program is helping consumers file data-breach-related suits against Equifax for up to $25,000--though it can't help them litigate their cases.
What AI Can't Do
According to Yuen Thio, AI can't yet replicate advocacy, negotiation, or the structuring of complex deals. The New York Times suggested that tasks like advising clients, writing briefs, negotiating deals, and appearing in court were beyond the reach of computerization, at least for a while. AI also isn't yet very good at the kind of creative writing found in a Supreme Court brief--or a movie script.
How Are Lawyers Using AI?
Lawyers are already using AI to do things like reviewing documents during litigation and due diligence, analyzing contracts to determine whether they meet pre-determined criteria, performing legal research, and predicting case outcomes.
Document review for litigation involves the task of looking for relevant documents--for example, documents containing specific keywords, or emails from Ms. X to Mr. Y concerning topic Z during March, 2016. Setting search parameters for document review doesn't require AI, but using AI improves the speed, accuracy, and efficiency of document analysis.
For example, when lawyers using AI-powered software for document review flag certain documents as relevant, the AI learns what type of documents it's supposed to be looking for. Hence, it can more accurately identify other relevant documents. This is called "predictive coding." Predictive coding offers many advantages over old-school manual document review. Among other things, it:
- leverages small samples to find similar documents
- reduces the volume of irrelevant documents attorneys must wade through
- produces results that can be validated statistically
- is at least modestly more accurate than human review
- is much faster than human review
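The feedback loop behind predictive coding can be sketched in miniature. The scoring method below (counting terms shared with lawyer-flagged documents) is a deliberate simplification standing in for the statistical models real systems use, and the documents and labels are invented:

```python
from collections import Counter

def learn_relevant_terms(labeled_docs):
    """Collect terms that appear in documents lawyers flagged as
    relevant but not in documents they flagged as irrelevant."""
    relevant = Counter()
    irrelevant = Counter()
    for text, is_relevant in labeled_docs:
        (relevant if is_relevant else irrelevant).update(text.lower().split())
    return {term for term in relevant if term not in irrelevant}

def score(doc, relevant_terms):
    """Fraction of a document's terms matching the learned term set."""
    words = doc.lower().split()
    return sum(w in relevant_terms for w in words) / len(words)

# Small seed set reviewed by lawyers (True = flagged relevant).
seed_set = [
    ("merger agreement between acme and globex", True),
    ("lunch menu for the cafeteria", False),
]
terms = learn_relevant_terms(seed_set)

# Rank the unreviewed collection so the likeliest-relevant
# documents surface first.
unreviewed = [
    "draft merger agreement schedule",
    "cafeteria menu update",
]
ranked = sorted(unreviewed, key=lambda d: score(d, terms), reverse=True)
print(ranked[0])
# → draft merger agreement schedule
```

The leverage comes from the ranking: attorneys review the high-scoring documents first, and each new flag feeds back into the model, which is what makes validation by statistical sampling possible.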
Predictive coding has been widely accepted as a document review method by US courts since the 2012 decision in Da Silva Moore v. Publicis Groupe.
Clients need to analyze contracts both in bulk and on an individual basis.
For example, analysis of all contracts a company has signed can identify risks, anomalies, future financial obligations, renewal and expiration dates, etc. For companies with hundreds or thousands of contracts, this can be a slow, expensive, labor-intensive, and error-prone process (assuming the contracts aren't already entered into a robust contract management system). It's also boring for the lawyers (or others) tasked with doing it.
On a day-to-day basis, lawyers review contracts, make comments and redlines, and advise clients on whether to sign contracts as-is or try to negotiate better terms. These contracts can range from simple (e.g., NDAs) to complex. A backlog of contracts to review can create a bottleneck that delays deals (and the associated revenues). Lawyers (especially inexperienced ones) can miss important issues that can come back to bite their clients later.
AI can help with both bulk and individual contract review.
At JPMorgan, an AI-powered program called COIN has been used since June 2017 to interpret commercial loan agreements. Work that previously took 360,000 lawyer-hours can now be done in seconds. The bank is planning to use the technology for other types of legal documents as well.
Some AI platforms, such as the one provided by Kira Systems, allow lawyers to identify, extract, and analyze business information contained in large volumes of contract data. This is used to create contract summary charts for M&A due diligence.
The company I work for, LawGeex, uses AI to analyze contracts one at a time, as part of a lawyer's daily workflow. To start with, lawyers set up their LawGeex playbooks by selecting from a list of clauses and variations to require, accept, or reject. For example, a California governing law clause might be OK, but Genovian law isn't. Then, when someone uploads a contract, the AI scans it and determines what clauses and variations are present and missing. The relevant language is highlighted and marked with a green thumbs-up or a red thumbs-down based on the client's preset criteria.
In-house lawyers use LawGeex to triage standard agreements like NDAs. Contracts meeting pre-defined criteria can be pre-approved for signature; those that don't are kicked to the legal department for further review and revision.
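The playbook-style triage described above can be illustrated with a short sketch. To be clear, the clause names, playbook format, and detection logic here are hypothetical illustrations, not LawGeex's actual system (which relies on AI to locate and interpret the clause language in the first place):

```python
# Hypothetical playbook: clause variations to accept or reject.
PLAYBOOK = {
    "governing_law": {"accept": ["california"], "reject": ["genovia"]},
}

def check_clause(clause_name, clause_text, playbook):
    """Return 'thumbs-up', 'thumbs-down', or 'needs review' for a
    clause that has already been located in an uploaded contract."""
    rules = playbook.get(clause_name)
    if rules is None:
        return "needs review"
    text = clause_text.lower()
    if any(term in text for term in rules["reject"]):
        return "thumbs-down"
    if any(term in text for term in rules["accept"]):
        return "thumbs-up"
    return "needs review"

print(check_clause(
    "governing_law",
    "This Agreement is governed by the laws of California.",
    PLAYBOOK,
))
# → thumbs-up
```

A contract whose clauses all come back thumbs-up can be pre-approved for signature; anything else is routed to the legal department, matching the triage workflow described above.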
Any lawyer who's ever done research using Lexis or Westlaw has used legal automation. Finding relevant cases in previous eras involved the laborious process of looking up headnote numbers and Shepardizing in paper volumes. But AI takes research to the next level. For example, Ross Intelligence uses the power of IBM's Watson supercomputer to find similar cases. It can even respond to queries in plain English. The power of AI-enabled research is striking: using conventional research methods, a bankruptcy lawyer took 10 hours to find a case nearly identical to the one he was working on. Ross's AI found it almost instantly.
Lawyers are often called upon to predict the future: If I bring this case, how likely is it that I'll win -- and how much will it cost me? Should I settle this case (or take a plea), or take my chances at trial? More experienced lawyers are often better at making accurate predictions, because they have more years of data to work with.
However, no lawyer has complete knowledge of all the relevant data.
Because AI can access more of the relevant data, it can be better than lawyers at predicting the outcomes of legal disputes and proceedings, and thus helping clients make decisions. For example, a London law firm used data on the outcomes of 600 cases over 12 months to create a model for the viability of personal injury cases. Indeed, trained on 200 years of Supreme Court records, an AI is already better than many human experts at predicting SCOTUS decisions.
How Will AI Affect the Legal Profession?
A consensus has emerged that AI will significantly disrupt the legal market. AI will impact the availability of legal sector jobs, the business models of many law firms, and how in-house counsel leverage technology.
According to Deloitte, about 100,000 legal sector jobs are likely to be automated in the next twenty years. Deloitte claims 39% of legal jobs can be automated; McKinsey Global Institute estimates that 23% of a lawyer's job could be automated. Some estimates suggest that adopting all legal technology (including AI) already available now would reduce lawyers' hours by 13%.
How Law Firms are Responding to AI
Law firms are notoriously slow to adapt to new technologies. Enhancing efficiency is often seen as contrary to the economic goal of maximizing billable hours. Lawyers are also seen as being techno-phobic.
However, many law firms are trying to understand and use new legal technologies, including AI. According to the London Times, "[t]he vast majority of the UK’s top 100 law firms are either using artificial intelligence or assessing the technology." Firms adopting AI systems include Latham & Watkins, Baker & McKenzie, Slaughter & May, and Singapore's Dentons Rodyk & Davidson.
Ron Dolin, a senior research fellow at Harvard Law School's Center on the Legal Profession, says that traditional law firm business models based on armies of first year associates racking up billable hours doing M&A contract review are doomed by the advent of AI. This isn't necessarily bad news for junior associates--or at least for the ones who still have jobs--as many hated doing contract review in the first place.
Firms that fail to take advantage of AI-powered efficiencies may lag in competing with those who do--at least to the extent clients insist on fixed-rate billing. Thus, lawyers who understand technology and educate themselves about the latest legaltech developments may be of increasing value to their firms.
How In-House Counsel Are Using AI
Corporate counsel have obvious reasons to adopt AI. Unlike attorneys in law firms, corporate counsel have no incentive to maximize their hours. Indeed, many lawyers go in-house to improve their work-life balance, which includes getting home at a reasonable hour. They're also often subject to strict budget and headcount constraints, so they have to figure out how to get more done with limited resources. AI helps in-house lawyers get home earlier without increasing their departmental budgets.
AI and the Future of the Legal Profession
The ABA Model Rules of Professional Conduct ("Model Rules") require that lawyers be competent--and that they keep up with new technology. As Comment 8 states:
To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology...
At least 27 states have adopted some form of this Model Rule. In January of 2017, Florida became the first state to require technology training as part of its continuing legal education requirement. Other states seem likely to follow suit. Indeed, failing to use commonly available technology, like email and e-discovery software, can be grounds for a malpractice claim or suspension by the bar.
Of course, AI-powered legal automation is not yet common. But it soon will be. Spending on AI is expected to grow rapidly--from $8 billion in 2016 to $47 billion in 2020--as AI is seen as reducing costs and increasing efficiency. Top MBA programs already have courses on how managers can use AI applications.
As they come to rely on AI, C-level executives may expect that their inside and outside lawyers are also up-to-speed.