
Under the Watchful, Unblinking Eye: Privacy Implications of the New York Police Department’s Deployment of Autonomous Robots


In late September 2023, a 400-pound fully autonomous robot began patrolling New York City’s Times Square subway station. The robot, a “Knightscope K5” (“K5”), is part of New York City Mayor Eric Adams’ latest efforts to reduce crime—an issue that was at the center of his 2021 mayoral campaign.

Since taking office in January 2022, Mayor Adams has pushed forward on his agenda while also repeatedly calling for city agencies to reduce spending. He has pointed out that the K5 costs $9 per hour to lease and doesn’t require “bathroom breaks” or “meal breaks” like a human officer would, making it a seemingly cost-effective method of bolstering law enforcement. Indeed, Knightscope, Inc., the robotics company behind the K5, describes the product as an “advanced, force-multiplying physical deterrent” that can effectively provide 24/7 autonomous services. Based on this description, the K5 does appear to be perfectly tailored to address Mayor Adams’ dual concerns.

However, civil rights advocates note that a glaring issue has been left out of the city’s cost-benefit analysis of the K5: its privacy risk. With features like license plate recognition and “360-degree eye-level video streaming and recording,” the K5 raises a host of concerns related to privacy, surveillance, and civil liberties. Notably, the New York Police Department’s (“NYPD”) deployment of the K5 comes at a time when various other technologies are being implemented in ways that hint at a gradual erosion of privacy and anonymity in public spaces. Late last year, it was revealed that Madison Square Garden Entertainment was using a secretive facial recognition system to single out individuals on its “exclusion list” and prevent them from attending events at its venues. The D.C. Metropolitan Police Department has been creating real-time surveillance maps to identify and monitor protestors engaging in First Amendment-protected activities. Finally, numerous airports across the country have started to phase in biometric scanners to replace manual passport checks.

While some may cheer at the prospect of a faster TSA line, the significant risk of bias inherent in these tools must not be overlooked. Studies have repeatedly shown that facial recognition algorithms are less accurate for certain demographic groups, such as women and Black individuals. In one study, facial recognition systems were found to be 34% more likely to misidentify a darker-skinned woman than a lighter-skinned man. Results like these are concerning enough in a research setting, but in the real world they can carry severe consequences that reinforce existing prejudices. Moreover, because police are more likely to arrest Black individuals for minor offenses, Black individuals are overrepresented in the mugshot databases used to train the very facial recognition systems that law enforcement relies on to make arrests. Together, these inequities create a self-perpetuating cycle of bias and harm, and there do not appear to be safeguards in place to prevent the NYPD’s new robots from contributing to it.

The K5 is not currently using facial recognition technology, but its four cameras and 360-degree video streaming capability are still ripe for abuse. Even if the K5 itself is not scanning and identifying New Yorkers’ faces, privacy advocate Albert Fox Cahn, the Executive Director of the Surveillance Technology Oversight Project, notes that there is “no binding policy banning police from using facial recognition software on the images collected by the robots.” The K5’s other features, such as license plate recognition, raise separate concerns. Not only can an automatic license plate reader make mistakes, but it can also be leveraged to create permanent records of drivers’ routes and destinations. Given that the K5 can read 1,200 license plates a minute, the risk of widespread invasions of privacy is substantial.

Further, these issues are compounded by the possibility that they stem from deliberate evasion of the Public Oversight of Surveillance Technology (“POST”) Act, which was enacted in 2020 to enable greater oversight of the NYPD’s use of surveillance technologies. Under the POST Act, whenever the NYPD acquires a new surveillance technology, it must publish a draft impact and use policy for the technology at least 90 days before use and deployment. This requirement is designed to give members of the public a chance to review the policies and provide feedback that may be incorporated into revisions. Ideally, the impact and use policies should set out rules governing the use of the technologies, explain what access the public or external entities will have to the technologies’ data, describe how the NYPD trains officers to use the technologies properly, and more. Because the K5 undoubtedly falls within the purview of the POST Act, the NYPD should have completed the prescribed notice-and-comment period and complied with the Act’s other requirements before deploying the technology.

While it remains to be seen whether the K5 will lead to increased safety, American University law professor Andrew Ferguson worries that tools like autonomous robots are part of the phenomenon of “security theater”: the implementation of security measures that are highly visible but perhaps minimally effective. Safety implications aside, the very presence of the K5 is likely to contribute to a subconscious normalization of increased surveillance in public spaces.