Mick Champayne | Face/Off | 06.07.2019
In a bold, perhaps prescient move, San Francisco has become the first city to ban the use of facial recognition by police and government agencies. Here's why other cities and governments would be wise to follow suit.
To know me is to know I love memes, especially when there’s an aspect of personalization. Last year, when Google’s Arts & Culture app went viral, I was excited to find my doppelgänger in a famous work of art. The app uses image recognition to comb art collections from more than 1,200 museums, galleries, and institutions across the world and matches your selfie against them. But for me and everyone else in Chicago, that feature wasn’t available. With a little digging, I found out it’s most likely because Illinois has one of the US’s strictest laws on the use of biometrics, which include facial, fingerprint, and iris scans.
In May of this year, San Francisco, one of the most tech-friendly cities in the world, made top headlines as it became the first US city to ban police and local government agencies from using facial recognition. This technology can detect and analyze human faces and compare traits against databases, such as matching a driver’s license picture to a mugshot in a criminal database. It’s also been used in more real-time situations, like monitoring crowds at protests, shopping malls, and concerts to identify potential suspects, without consent or participation.
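At its core, the matching step described above compares a numerical "embedding" of a probe face against a database of known faces. The sketch below is a toy illustration of that idea, not any agency's actual system; the embeddings, names, and threshold are all invented for the example.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_face(probe, database, threshold=0.9):
    """Return the name of the database entry most similar to the probe,
    or None if nothing clears the match threshold."""
    best_name, best_score = None, threshold
    for name, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy vectors standing in for what a real face-recognition model produces.
database = {
    "license_photo_A": [0.9, 0.1, 0.2],
    "license_photo_B": [0.1, 0.8, 0.5],
}
probe = [0.88, 0.12, 0.22]  # e.g. a frame from a surveillance camera
print(match_face(probe, database))  # license_photo_A
```

The threshold is the crux: set it low and the system flags innocent people; set it high and it misses real matches. Real deployments face exactly this trade-off, at massive scale.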
"With this vote, San Francisco has declared that face surveillance technology is incompatible with a healthy democracy and that residents deserve a voice in decisions about high-tech surveillance," Matt Cagle from the American Civil Liberties Union (ACLU) in Northern California told BBC News.
The ban is a bold move and, hopefully, just the alarm bell we need to start a much-needed public dialogue about the future repercussions of this relatively new technology. Although there have been dramatic improvements in facial recognition, there are still concerns about accuracy and bias, and about the potential injustices that may result. Studies have shown that these systems produce high rates of false positives for people of color and women, because the datasets used to train the software have been disproportionately male and white.
When it comes to citizens’ privacy, the legislation's author, Supervisor Aaron Peskin, says, “Facial recognition technology is uniquely dangerous and oppressive. Unlike other technologies, we cannot hide our faces or change what we look like.” Proceeding with a technology without understanding all of the implications and risks involved is hugely irresponsible.
In recent years, we’ve been conditioned to implicitly trust facial recognition technology. Using it for commercial purposes has become common, from Facebook automatically tagging your friends’ faces in pictures, to Snapchat filters that make us look cute, to the latest iPhone’s FaceID feature that makes unlocking your phone a breeze. But while this technology has been marketed as a convenience, we should be wary of adopting it so naively.
For instance, the New York Times reported that China is already using advanced facial recognition to track a Muslim minority called the Uighurs. “The facial recognition technology, which is integrated into China’s rapidly expanding networks of surveillance cameras, looks exclusively for Uighurs based on their appearance and keeps records of their comings and goings for search and review.” This technology has since been used as a tool to round up Uighurs and detain them in internment camps.
New and emerging technology is evolving exponentially, and we can see how far behind laws and regulation lag. But until governments can catch up, is there a way for citizens to safeguard their future?
One speculative designer imagines combatting surveillance with fashion. Adam Harvey, an artist and technologist from Berlin, created HyperFace, a series of patterned textiles designed to confuse specific facial recognition algorithms. He created it for interaction design studio Hyphen Labs' NeuroSpeculative AfroFeminism, a project that examines how black women will interact with technology in the future, exploring themes of black womanhood, technology, security, protection, and visibility. “Instead of seeking computer vision anonymity through minimizing the confidence score of a true face, HyperFace offers a higher confidence score for a nearby false face by exploiting a common algorithmic preference for the highest confidence facial region,” explains Harvey. “In other words, if a computer vision algorithm is expecting a face, give it what it wants.”
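Harvey's trick can be sketched in a few lines. Many detection pipelines keep only the candidate region with the highest confidence score; the toy below (invented labels and scores, not Harvey's actual code or any real detector's API) shows how a decoy that scores higher than the real face hijacks that selection step.

```python
def pick_face(detections):
    """Common simplification in detection pipelines: keep only the
    candidate region with the highest confidence score."""
    return max(detections, key=lambda d: d["confidence"])

# Without a decoy, the real face wins.
detections = [{"label": "real_face", "confidence": 0.93}]
print(pick_face(detections)["label"])  # real_face

# A HyperFace-style pattern presents a nearby false face that scores
# higher, so the algorithm's attention is drawn away from the wearer.
detections.append({"label": "decoy_pattern", "confidence": 0.98})
print(pick_face(detections)["label"])  # decoy_pattern
```

The countermeasure doesn't hide the face; it simply gives the algorithm a more convincing face to look at, exactly as Harvey describes.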
Luckily for us, maybe we won’t have to resort to band-aiding privacy concerns with special fabric. Other cities and governments are starting to take note, and beginning to implement regulation before it’s too late. Cities like Oakland, California, and Somerville, Massachusetts, have started to lead efforts for their own bans, and the conversation is gaining more traction. With facial recognition's ubiquity becoming increasingly apparent, privacy advocates see 2019 as a potential turning point. Digital media scholar Luke Stark, who works for Microsoft Research Montreal, likens facial recognition to plutonium. “To avoid the social toxicity and racial discrimination it will bring,” he says, “facial recognition technologies need to be understood for what they are: nuclear-level threats to be handled with extraordinary care.”