Facial Recognition Technology’s Impact on Racial Injustice
Beginning in late spring 2020, protests around the United States led to a renewed reckoning with the injustices faced by the Black community, especially with regard to misconduct by, and lack of accountability within, law enforcement. The conversation has extended to the realm of technology and, in particular, the use of facial recognition software by police departments. In June 2020, tech giants IBM, Microsoft, and Amazon announced that they would pause or discontinue sales of their facial recognition technology to law enforcement agencies in the United States. That same month, the George Floyd Justice in Policing Act was introduced in Congress, proposing limitations on the use of facial recognition technology on police body camera footage. Local governments in a few cities, including Boston and San Francisco, have already banned the use of facial recognition technology by their police departments.

Why are certain companies, lawmakers, and even law enforcement departments themselves hesitant to embrace the use of facial recognition technology in policing? Findings that facial recognition technology is less accurate on subjects with darker skin tones give rise to concerns about how the technology can be used to perpetuate racism in a system that “already disproportionately polices and criminalizes Black people,” the ACLU argues.

One way in which police departments currently use facial recognition algorithms is to identify suspects against mugshot databases. These mugshot databases are themselves racially biased, however. Because Black people are arrested for minor offenses at far higher rates than white people, the databases contain significantly more information about the Black community. Using facial recognition technology to match suspects against these databases thus perpetuates a cycle of Black incarceration. Moreover, this biased data, coupled with the technology’s inaccuracy, has led directly to false arrests in some cases.

Even if facial recognition technology were not faulty and were equally accurate across races, it could still be used as a tool that exacerbates racism. Commentators at the RAND Corporation note that facial recognition technology contributes to the “underlying dilemma: the imbalance of power between citizens and law enforcement” on the question of privacy and mass surveillance. They argue that addressing that dilemma by focusing on police reform would be a more effective approach than a ban on the technology.

The ACLU characterizes facial recognition technology as but the newest mode of surveillance in a long and “pernicious” history of Black surveillance in the United States. This system has roots in 18th-century lantern laws, which singled out Black and Brown enslaved people and required that they carry candles with them after dark if not accompanied by a white person. In more recent years, the government has used surveillance to monitor the political speech of Black Lives Matter activists and to spy on people suspected of drug offenses in the war on drugs, which predominantly targets Black people. The ACLU argues that the use of facial recognition technology by law enforcement would further the injustices Black communities have experienced throughout history.

Stakeholders have a range of views on the treatment of facial recognition technology. The ACLU proposes a ban on the technology, while the RAND Corporation believes the focus should be on police reform. Tech companies have committed to finding ways to ensure the technology is used responsibly. Overall, though, there has been broad acknowledgment that facial recognition technology is uncharted territory requiring more regulation and careful consideration from lawmakers, especially given its current and potential impact on racial injustice.