
Why schools need to abandon facial recognition, not double down on it

Schools should not be adding surveillance tech to their hallways: it disproportionately harms students of color.

Fifth-grade students sit distanced from each other, separated by plexiglass, at Caroline G. Atkinson School in Freeport, New York. [Photo: Reece T. Williams/Newsday RM via Getty Images]

With the loosening of COVID-19 restrictions and the end of summer quickly approaching, schools are preparing to welcome students back into their classrooms for in-person learning. With that transition comes the return of a troubling trend in education: monitoring students through facial recognition systems, and weighing whether to use the technology to enforce existing school discipline policies.


The number of schools deploying these tools threatens to grow in the fall as many consider using federal COVID-19 relief funds to purchase facial recognition equipment. The use of this technology disproportionately harms students of color and undermines schools’ commitments to providing equitable and safe learning environments. School districts must expel this flawed and biased technology from our schools, not double down on it.

Welcoming facial recognition into our children’s classrooms creates situations ripe for discrimination based on flimsy science. Emerging research is clear that facial recognition technology is inaccurate and reproduces age, race, and ethnicity biases. It also performs more poorly on children than on adults, due in part to the facial changes that occur during adolescence. Yet companies continue aggressively marketing facial recognition as a cost-effective public safety solution without disclosing these tools’ inaccuracies and racial and gender biases.

Some companies are also pitching so-called affect recognition in schools, which uses facial recognition technology to assess students’ emotions. But affect recognition is merely a digital repackaging of the debunked and discredited racial stereotypes underpinning physiognomy, phrenology, and other historical manifestations of “scientific racism.”


Affect recognition technology relies on the flawed premise that observable differences in physical characteristics among individuals and groups can be measured, quantified, and interpreted in ways that offer insights into a person’s intellect, morality, or trustworthiness. As such, affect recognition falsely assigns scientific significance to racial differences in ways that reproduce racial hierarchy and social inequality. A person’s character, emotions, and “risk level” cannot be discerned from their body or face.

Technological flaws and limitations, however, are not the only concern. Facial recognition technology represents an unprecedented expansion of monitoring and surveillance. Constant surveillance can increase student anxiety and stress, particularly if captured data is being used for student assessment, monitoring, and disciplinary decisions.

Research in similar contexts has revealed that closed-circuit television (CCTV) surveillance creates a chilling effect for people who fear their actions may be misinterpreted. The same concerns apply in schools. Students may wrestle with basic questions about whom they interact with and how their peer engagement is perceived, especially if those actions or associations could lead to increased scrutiny. Students may also tailor their emotional reactions to avoid having those reactions captured and followed by increased monitoring. Whether used for surveillance, assessing emotions, or other objectives, the pervasive influence of facial recognition technology will fundamentally redefine, and negatively impact, students’ experience in school.


But the harms will not be borne equally by students. Relying on surveillance tools that are inaccurate and reproduce racial biases will cast an undue level of suspicion on Black and brown students. Such an outcome perpetuates the dangerous idea that students of color are “threats” to be managed rather than students to be educated. Nor can this expanded surveillance be divorced from the ongoing crisis of the school-to-prison pipeline. Making assessments and disciplinary decisions based on facial recognition technology will exacerbate preexisting racial disparities in suspensions, expulsions, school-based arrests, and other forms of exclusionary school discipline.

Additionally, harvested and stored student data carries the inherent risk of being used by other agencies, including law enforcement, family welfare, and immigration authorities. Collectively, the deployment of facial recognition technology creates an environment that criminalizes students of color and denies these young people the opportunity to learn.

All these concerns are further heightened in schools where law enforcement would have direct access to facial recognition technology and harvested data. Many young people of color already struggle in overpoliced, hyper-surveilled communities, and are subject to racially biased policing practices and tactics from an early age. In places like Chicago, Los Angeles, and New York, police departments are deploying the same types of tech-driven monitoring and surveillance that students, especially Black and brown students, seek refuge from in their schools.


For these and many other reasons, school districts must move swiftly to ban the use of facial recognition technology as part of a larger commitment to ensuring students feel safe in school. Bans would help mitigate practices that risk reproducing racial disparities in discipline and subjecting Black and brown students to increased and unwarranted scrutiny. But that is just the first step.

Community stakeholders can become a powerful voice in pressuring their local governments to join the growing list of cities and states that have banned police use of facial recognition technology. Students—children—should not be monitored by surveillance technology that is flawed, reproduces biased outcomes, and is ill-equipped to do anything beyond erode public trust and safety.


John S. Cusick is a litigation fellow at the NAACP Legal Defense Fund, primarily working on police misconduct, criminal justice, and voting cases and advocacy.


Clarence Okoh is an Equal Justice Works fellow at the NAACP Legal Defense Fund. His fellowship project seeks to challenge the discriminatory use and impact of artificial intelligence and machine-learning technologies on communities of color and low-income communities.