
Approaches to Regulating Government Use of Facial Recognition Technology


Much has been said about the discriminatory nature of facial recognition technology (FRT): FRT exhibits racial bias, FRT misidentifies women at higher rates, FRT is discriminatory by design. As a result, particular attention has been paid to government use of FRT, especially by law enforcement. The U.S. government is a major buyer of FRT, having awarded $76 million in FRT-related contracts over the past twenty years. Following increased awareness of FRT after law enforcement surveillance of Black Lives Matter protesters, various states and municipalities, including Massachusetts, Vermont, New Orleans, and San Francisco, have moved to regulate government use of this technology. Other states, such as Nebraska and Colorado, have considered similar proposals as well. These efforts fall into three categories: moratoria, bans, and more nuanced regulatory bills.

A number of states and cities have issued moratoria on government use of FRT, and these moratoria have taken different forms. For example, in 2019 California imposed a moratorium on police departments' use of FRT on body camera footage; that moratorium was not extended and expired on January 1, 2023. Massachusetts, on the other hand, issued a more general moratorium on FRT use that can be repealed only by “express statutory authorization” from the legislature.

Other states have banned government use of FRT outright. For example, Virginia initially enacted one of the most restrictive bans in the country, prohibiting law enforcement from using FRT in any context, though it has since changed its regulatory approach. Similarly, San Francisco bans all government use of “facial surveillance” technology. This is the simplest approach and completely eliminates the possibility of using FRT in law enforcement contexts.

As a regulatory approach, moratoria and bans are fairly heavy-handed, and other states and cities have opted for more nuanced regulation. For example, New Orleans currently bans any surveillance use of FRT but permits its use, with supervisor permission, in investigations of a variety of violent crimes (although the list of crimes also includes simple robbery and purse-snatching). Virginia's approach is broader still, permitting law enforcement to deploy FRT in a wide range of contexts, including identifying potential suspects, victims of numerous crimes, and “lawfully detained” individuals. Many have argued for more tailored regulations, such as database restrictions that limit which databases can be subjected to FRT scans based on image quality and require permission to access those databases.

Over the past few years, some cities and states have changed their approaches to FRT, particularly jurisdictions that initially issued complete or near-complete bans on government FRT use. Virginia lifted its blanket ban and now authorizes law enforcement to use FRT for a broad range of activities, despite public outcry from civil rights groups. New Orleans similarly lifted its broad ban on government FRT use and instead implemented a regulatory structure permitting law enforcement to use FRT in violent crime cases. California did not extend its prohibition on applying FRT to bodycam footage. These changes suggest a trend away from prohibition, even though the technology still faces the same criticisms about discrimination.

Meanwhile, the federal government has not yet passed an FRT or biometric data bill. There have been proposals, such as the Facial Recognition Act of 2022 and an earlier 2020 call for a moratorium, but none has progressed. Most recently, Senator Ed Markey (D-MA), alongside a handful of other members of Congress, introduced legislation that would place a moratorium on government use of FRT that could be lifted only by statute, similar to the approach taken by Massachusetts. The “Facial Recognition and Biometric Technology Moratorium Act of 2023,” in addition to establishing a moratorium, creates a cause of action allowing individuals to sue the federal government if an official violates the moratorium.

The Executive Branch has taken limited action, though the Biden Administration has expressed interest in regulating algorithmic and biometric technologies. In February 2023, the White House issued the Executive Order on Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government, which specifically calls on agencies to “protect the public from algorithmic discrimination.” This order could be the starting point for additional regulatory action, including guidance and rulemaking on agency use of FRT and other discriminatory technologies.

There are clear benefits to establishing a federal regulatory scheme for FRT. Regulation would likely increase public confidence in the U.S. government's respect for citizens' privacy. A nationwide approach would also minimize confusion about what evidence is admissible in court, ensuring uniformity and consistency regardless of a person's state of residence. The federal government also has access to far more data, and data of broader scope, than individual states, so it can assess whether a technology is discriminatory more accurately than a state with a narrower view and more limited resources. On the other hand, as with any federal regulatory scheme, a nationwide approach could overlook circumstances specific to certain states, such as higher crime rates or poorly funded police departments, or preempt stricter protections that states have implemented regarding law enforcement use of FRT.

Permitting the states to continue their state-by-state approach to FRT regulation likewise poses both benefits and challenges. Allowing each state to choose whether to regulate FRT at all leaves people in some states entirely unprotected from FRT use, while people elsewhere enjoy stronger protections. In addition, if the federal government takes no action at all to regulate FRT, federal agencies can use the technology as much as they wish and in any context (within existing constitutional and statutory constraints). Given the trend toward deregulation in the states, without a parallel improvement in the technology's discriminatory effects, advocacy groups argue that a federal regulatory scheme (or a federal ban on government use of FRT) is increasingly necessary to protect the civil rights of U.S. citizens.

Potential FRT regulation is an area to watch alongside other data privacy concerns. As public awareness of data privacy issues continues to rise and commercial FRT becomes more widespread, more citizens, and therefore regulators, are likely to become concerned, particularly because a face cannot be encrypted the way a phone can.