Law enforcement agencies in several cities, including New York and Miami, have reportedly been using controversial facial recognition software to track down and arrest individuals who allegedly participated in criminal activity during Black Lives Matter protests, months after the fact.
Miami police used Clearview AI to identify and arrest a woman for allegedly throwing a rock at a police officer during a May protest, local NBC affiliate WTVJ reported this week. The agency has a policy against using facial recognition technology to surveil people exercising "constitutionally protected activities" such as protesting, according to the report.
"If someone is peacefully protesting and not committing a crime, we cannot use it against them," Miami Police Assistant Chief Armando Aguilar told NBC6. But, Aguilar added, "We have used the technology to identify violent protesters who assaulted police officers, who damaged police property, who set property on fire. We have made several arrests in those cases, and more arrests are coming in the near future."
An attorney representing the woman said he had no idea how police identified his client until contacted by reporters. "We don't know where they got the image," he told NBC6. "So how or where they got her image from begs other privacy rights. Did they dig through her social media? How did they get access to her social media?"
Similar reports have surfaced from around the country in recent weeks. Police in Columbia, South Carolina, and the surrounding county likewise used facial recognition, though from a different vendor, to arrest several protesters after the fact, according to local paper The State. Investigators in Philadelphia also used facial recognition software, from a third vendor, to identify protesters from photos posted to Instagram, The Philadelphia Inquirer reported.
New York City Mayor Bill de Blasio promised on Monday the NYPD would be "very careful and very limited with our use of anything involving facial recognition," Gothamist reported. This statement came on the heels of an incident earlier this month when "dozens of NYPD officers—accompanied by police dogs, drones and helicopters" descended on the apartment of a Manhattan activist who was identified by an "artificial intelligence tool" as a person who allegedly used a megaphone to shout into an officer's ear during a protest in June.
Unclear view
The ongoing nationwide protests, which seek to bring attention to systemic racial disparities in policing, have also intensified scrutiny of police use of facial recognition systems in general.
Repeated tests and studies have shown that most facial recognition algorithms in use today are significantly more likely to generate false positives or other errors when trying to match images featuring people of color. Late last year, the National Institute of Standards and Technology (NIST) published research finding that facial recognition systems it tested had the highest accuracy when identifying white men but were 10 to 100 times more likely to make mistakes with Black, Asian, or Native American faces.
Matching photos of civil rights protesters comes with an additional, distinctly 2020 wrinkle: NIST found in July that most facial recognition algorithms perform significantly worse when matching masked faces. A significant percentage of the millions of people who have shown up for marches, rallies, and demonstrations around the country this summer have worn masks to mitigate the risk of COVID-19 transmission in large crowds.
The ACLU in June filed a complaint against the Detroit police, alleging the department arrested the wrong man based on a flawed, incomplete match provided by facial recognition software. In the wake of the ACLU's complaint, Detroit Police Chief James Craig admitted that the software his agency uses, on its own, misidentifies suspects about 96 percent of the time.