The Promise and Peril of AI Legal Services to Equalize Justice

Commentary

Decades ago, labor regulators predicted that routine factory work could be reduced to a set of computerized functions. At the same time, they assumed that the work of white-collar professionals, like lawyers, could not be digitized. Law is a complex and dynamic field, which complicates the task of those seeking to automate it. New developments in artificial intelligence (“AI”), however, promise to fulfill many of a lawyer’s functions and democratize the law.

Background

The digitization of legal services could ameliorate America's access-to-justice problem. As the cost of lawyers escalates and the law grows more complex, more people are effectively locked out of justice every year. Nearly 92% of impoverished Americans, or 36 million people, cannot afford to hire a lawyer for a civil suit. This “judicial deficit” perpetuates poverty and compromises the fundamental rights lawyers are meant to protect, affecting issues ranging from eviction to healthcare to domestic abuse.

Court dockets today are checkered with power asymmetries—impoverished Americans are regularly ambushed by rich corporations and individuals who can afford multiple lawyers. The complexity and obscurity of the law means the targets of these efforts are unable to adequately defend themselves without legal counsel, allowing corporate legal abuses to go unaired and unchecked. Making matters worse, injustices from these lawyerless courts disproportionately affect women and people of color.

This article assesses new developments in AI legal services. It first explores the potential of artificial intelligence to democratize access to legal services and, without proper treatment, to instead reinforce existing inequalities. It then outlines specific reforms to address AI’s perils.

Promises of AI Legal Services

Even at this experimental stage, over 280 companies have started developing legal technology. Companies in this space have already raised over $757 million and filed for 1,369 legal machine-learning patents.

Automated legal systems can handle legal files in a matter of seconds. A recent AI system, Intelligent File 1.0, can automatically file and organize legal documents. Apps such as Rocket Lawyer are already helping impoverished Americans by instantly completing legal paperwork, such as business contracts, real estate agreements, and wills. The technology behind these systems simplifies complex legal doctrines and formalities, lowering the structural barriers that otherwise keep people from understanding the law, or completing even the simplest legal tasks, without a lawyer.

Beyond simplifying documentation, AI can also answer legal questions and offer assistance at low cost. Self-help chatbots empower low-income individuals to take their civil issues to court by providing immediate legal information tailored to their specific case or situation. These chatbots are designed to advise clients about their rights, legal strategies, and procedures in civil court.

A new chatbot app, rAInbow, can also identify areas of legal protection for potential victims of domestic violence. Powered by machine learning, technologies like rAInbow can help victims become aware of their rights and demystify confusing legal terminology.

The website DoNotPay has overturned over 100,000 parking tickets, saving low-income Americans millions of dollars. Luis Salazar, a bankruptcy lawyer, tested new legal software against his own skills, and the results, he said, “blew me away.” The machine analyzed a complex legal problem and quickly produced a simple two-page memo closely resembling what a human lawyer would have written.

Skeptics of legal automation argue that these emerging programs are disruptive agents that will displace lawyers. Richard Susskind, a lawyer, rebuts these concerns, arguing that lawyers and technology can work alongside each other. Legal technology can help law firms by speeding up mundane, time-consuming tasks and allowing lawyers to focus on more challenging, creative endeavors. Susskind argues automation will never replace a lawyer’s strategy, logic, creativity, and empathy — machine learning can only supplement them.

Impoverished Americans are losing their houses to eviction, their financial rights to corporate abuses, and their children to custody battles because they can neither afford lawyers nor effectively navigate complex law. The power of AI lies in its ability to sift through hundreds of cases and simplify the law. As Congress fails to act to protect the rights of underserved Americans, legal technologies can ameliorate the issue, transforming and expanding access to justice.

Perils of AI Legal Services

Legal technology is at an inflection point. Still experimental and under development, it must be steered and regulated to minimize future harms. While many scholars have decried legal AI as a danger because it displaces lawyers, only a few have recognized its capacity to actually widen the justice gap.

Experts have warned of the imbalances and underappreciated consequences of automated legal services. Drew Simshaw, an assistant professor at the Gonzaga University School of Law, writes that legal AI could create an inequitable “two-tiered system.” Patricia Barnes, an attorney and former judge, warns that AI used in law firms exacerbates “inequality in discrimination lawsuits.” Representative Ted Lieu has recently called for regulation, given elites’ heightened influence over AI.

In its current state, legal AI presents three main barriers to justice. First, high-quality AI may be expensive and thus only available to larger law firms, presenting a power asymmetry between law firms and individuals. Second, many impoverished Americans and people of color may be unable to access any AI in the first place. Third, the advent of legal AI may lead Congress to believe that impoverished individuals no longer need human civil lawyers, thereby halting movement on a long-requested right to civil counsel.

Unregulated legal AI locks law firms into a mutually reinforcing cycle that makes rich firms richer and widens revenue gaps between firms. Advanced AI is costly to obtain and adopt, so it is available chiefly to larger, wealthier firms with the necessary capital. These technologies not only automate time-consuming tasks but also assist in creative and analytical work. As larger firms adopt emerging legal AI, they can absorb a long-term process of trial and error and maximize the technology’s benefits, all with a financial safety net. Smaller firms lack that cushion and remain vulnerable, adopting cheaper, fully developed AI only later. With higher-quality AI, larger firms can extend more service to elite clients, but likely not to those most affected by the justice gap. By automating administrative tasks, national firms can also expand in size and geographic reach. Smaller firms, by contrast, are left less efficient and more self-reliant because they lack the organizational resources to leverage emerging legal AI.

Ultimately, these technological disparities between law firms are passed on to society at large. Individual lawyers representing lower- and middle-income Americans are at a disadvantage against wealthy firms that can exploit AI and the superior work product it helps generate.

Accessibility gaps in communications technology loom large, tracking gaps in age, race, geography, education, and income. By one measure, one in five Americans lacks reliable internet access. There is also a skills gap: many would-be pro se litigants lack the “necessary skills and resources to make meaningful use of technologies.” Professor Simshaw also observes that “some prepaid internet service plans do not provide the broadband coverage needed to support emerging legal technology applications.” These gaps could functionally shut many vulnerable communities out of legal AI, and with it, the justice system.

Another concern is that legal AI’s algorithms may exclude and alienate marginalized groups. Broadly, “self-help” legal services must transcend a one-size-fits-all model and accommodate the groups most affected by the deep fissures in America’s justice system. For one, most digital legal services are not multilingual, an especially concerning exclusion given that non-English speakers make up a significant share of lawyerless litigants. Sherley Cruz also highlights the importance of “accounting for different cultures’ communication styles.” When impoverished individuals provide their information to self-help AI services, information-gathering systems must be able to accept multiple storytelling formats. For example, people from cultures that do not typically use “free-flowing narratives” may struggle to answer the open-ended questions legal service providers rely on. Likewise, current AI legal services do not appear to account for non-chronological storytelling or for forms of communication, found in other cultures, that go beyond the verbal and written.

Trained on datasets that include language scraped from sources such as Reddit, AI chatbots that provide legal advice can sometimes produce overtly racist and biased responses. Amy Cyphert argues that these technologies produce such results precisely because of how they are trained, and that they “should not be used” to the extent that they reinforce biased stereotypes and further marginalize users. The persistence of such bias in commercially available products reflects a lack of consideration for racial inequality in the development of these AI legal platforms. Algorithms that not only reproduce but automate inequality are simply not fit to close the justice gap.

These inequalities could leave impoverished Americans with only low-quality AI services and amplify current power imbalances in civil court. If legal technology is the only service impoverished Americans can afford, this vulnerable population will be at the whim of those who control the technology; providers could, predictably, overlook low-income users and let the quality of legal service slide. Without quick intervention, America could soon normalize a lower tier of justice in the form of low-quality artificial intelligence, wasting the technology’s equalizing potential.

Making matters worse, calls for free public lawyers could fade from public discourse as even inadequate AI alternatives gain traction. Policymakers would likely abandon human-centered solutions, such as a right to civil counsel, in favor of a cheaper but inferior digital substitute, and hopes for future human-centered policy would dissipate into illusory legal technologies.

Many are quick to assume that, whatever its potential inequities, legal AI will inevitably be an improvement. But what many do not see is how digital legal services can prove structurally predatory and biased. Ineffective services harm many impoverished litigants: in her book “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy,” Cathy O’Neil notes that AI algorithms “tend to punish the poor.” Peter Yu, specifically, writes that this “divide” could facilitate cultural and educational biases against the impoverished. Broadly, some legal service algorithms disfavor poorer Americans, the very people the system intends to protect.

These perils keep impoverished individuals and other stakeholders from gaining meaningful access to equal digital services and equal justice. Without careful calibration and redesign, legal AI may only reinforce existing barriers to meaningful access to justice.

Regulatory Reforms

Existing regulation of legal practices fails to account for the rise of legal artificial intelligence. Without regulation, the future of legal AI may descend into an inequitable two-tiered system. To promote competition and calibration, regulatory innovation must parallel technological innovation. Regulators can embrace three main avenues to establish effective policies: transparency, competition, and regulatory sandboxes.

Transparency ensures that a small cadre of technical experts is not the sole author of critical AI systems. It forges key relationships between lawyers and technologists: lawyers can help effect meaningful changes in legal technology, such as integrating bias training, cultural consciousness, and other features helpful to clients. Transparency also gives smaller firms open access to developing AI and a channel of input to technologists, making AI more functional for smaller firms and equalizing the potential to seek justice across the board. Public transparency could likewise break through the AI “black box” that makes bias harder to detect; subjecting access-to-justice AI tools to external review can, in turn, decrease bias.

Other transparency regulations could ensure that low-income individuals are not preyed upon by low-quality digital legal services. Requiring providers to report the accuracy rates of their AI, for example, would allow the public to verify the quality of legal services. Susan Fortney calls for certifications as a system to check artificial intelligence. Transparency regulations ultimately help guarantee the effectiveness and quality of digital legal services, ensuring that poor Americans are not left with the short end of the bargain.

Competition may counter the predicted consolidation of AI legal services. Regulatory policies must therefore aim to boost competition and prevent legal AI monopolies. Competition is essential to push AI developers to improve their algorithms, make their services affordable, remove bias, and provide the most effective legal services. In this way, competition functionally serves as another “check” on AI companies.

In most American jurisdictions, lawyers can invest in technology companies, but technology companies cannot invest in legal practices. The result is an asymmetric dynamic in which wealthy firms have the capital to invest in technology while smaller firms do not. Current law thus forecloses cross-industry relationships between smaller law firms and technologists, cutting off an avenue for smaller firms to adopt new AI; lifting these investment restrictions could help smaller firms attract the interest of AI service providers. Legal scholars, as well as Justice Gorsuch, have called for lifting ownership and investment restrictions. The best vehicle may be a regulatory sandbox: an experimental space in which certain restrictions are lifted under the close observation of an oversight body. Ryan Nabil finds that regulatory sandboxes can significantly increase the accessibility of digital justice tools. In 2020, Utah launched the first regulatory sandbox for legal services, which has had notable success in making civil legal services more widely affordable. Expanding similar sandboxes to other states could simultaneously open AI legal services to impoverished Americans and to smaller law firms, helping both overcome previous financial barriers.

As legal technology gathers momentum, an approach of “technology is better than nothing” will not suffice. Artificial intelligence shows promise for equalizing access to justice, but it also risks exacerbating inequalities. Regulators must act soon to contain negative spillovers from legal technology and to ensure it shrinks the justice gap rather than enlarging it.