Detroit police are changing their rules for facial recognition software after a “grainy photo made from poorly lit footage” led to the arrest of an innocent man for theft, said The Detroit Free Press. Robert Williams will receive a $300,000 settlement for the wrongful 2020 arrest, which he said “upended” his life. “My wife and young daughters had to watch helplessly as I was arrested for a crime I didn’t commit,” he said. Investigators will no longer be allowed to make arrests or conduct lineups based solely on software identifications.
But Detroit isn’t the only city implementing such restrictions. “The powerful but imperfect artificial intelligence technology” has led to at least a half-dozen wrongful arrests, said The Washington Post, which is why a growing number of states and cities are drawing boundaries around the use of facial recognition. That hasn’t always worked: Investigators in Austin and San Francisco reportedly skirted the rules by “asking for help from other law enforcement agencies that still have access.” Disclosure can be an issue. “Police are using it” to make suspect identifications, said Chesa Boudin, San Francisco’s former district attorney, “but not saying they are using it.”
What did the commentators say?
Facial recognition software can be “both racist and faulty” when confronted with blurry security footage, Robert Williams said in a first-person column for Time. At the time he was arrested, the database used by investigators held 49 million photos, among them every Michigan driver’s license photo “going back years.” A search of that database wrongly matched security footage of an actual theft to Williams’ license photo. He spent 30 hours in jail. Now? Detroit police will train investigators on the “limitations and inaccuracies of facial recognition technology,” creating guardrails that help preserve civil liberties. That’s a “big step” toward ensuring other innocent people don’t lose their liberty.
Others aren’t so sure. “There is no safe way for police to use facial recognition,” the ACLU’s Nathan Freed Wessler said in The Sacramento Bee. A bill in California would regulate, not ban, police use of the technology. That isn’t enough, Wessler argued. Investigations often “exacerbate and compound the unreliability of facial recognition technology” because the software does not return a confirmed match; it generates a list of faces that are similar to, but do not actually match, the person in the security footage. That naturally leads to problems. “Rather than being an asset to police investigations, facial recognition poisons them.”
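To make Wessler’s point concrete: facial recognition systems typically convert faces into numerical embeddings and return the database entries most similar to the probe image. The rough Python sketch below is illustrative only, not any vendor’s actual pipeline; the function name, the toy gallery, and the random embeddings are all invented for this example. What it shows is the core issue he describes: a nearest-neighbor search always produces a ranked candidate list, even when the true person is nowhere in the database.

```python
import numpy as np

def top_k_candidates(probe: np.ndarray, gallery: np.ndarray, k: int = 5):
    """Return the k gallery entries most similar to the probe embedding.

    Note: these are the *most similar* faces in the database, not
    confirmed matches. A list comes back even when the person in the
    probe image is absent from the gallery entirely.
    """
    # Normalize vectors so dot products equal cosine similarity.
    probe = probe / np.linalg.norm(probe)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = gallery @ probe
    order = np.argsort(sims)[::-1][:k]  # highest similarity first
    return [(int(i), float(sims[i])) for i in order]

# Toy demo: a 49-million-photo database scaled down to 1,000 random
# embeddings. The "probe" belongs to no one in the gallery, yet the
# search still returns its top five look-alikes with similarity scores.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(1000, 128))
probe = rng.normal(size=128)
for idx, score in top_k_candidates(probe, gallery):
    print(f"candidate {idx}: similarity {score:.3f}")
```

Under these assumptions, the danger is in how the output is read: a “top candidate” with the highest score can look like an identification to an investigator, when it is only the closest resemblance the search could find.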
What next?
Police say facial recognition is usually more helpful than hurtful. In Gary, Indiana, for example, investigators said this month that the technology helped them track down a suspect in a fatal gas station shooting. And the FBI has used facial recognition software to identify and find suspects in the Jan. 6 insurrection. Others are more cautious: Microsoft in May banned American police departments from using its artificial intelligence service for facial recognition, Quartz said, apparently over worries the tech could generate “false or nonsensical information.”
The two sides are sparring over the California bill, said CalMatters. A coalition of civil liberties groups is arguing facial recognition technology will “increase unnecessary police interactions that too often have the potential to escalate into fatal encounters.” But police agencies and their allies say regulation will help them use the technology appropriately. Still, one lobbyist said, “there’s a clear need to bolster public trust” in the use of facial recognition.