As public backlash grows against US police departments over tactics perceived as overly aggressive and often brutal, companies doing business with law enforcement are rushing to distance themselves from those agencies.
In the past week, for example, Microsoft, Amazon and IBM have all announced that they will not make their facial recognition technology available to police.
Laws to govern use must be in place
Microsoft is the latest, with its president and chief legal officer, Brad Smith, calling on the US Congress to first regulate how the technology is used.
“We’ve decided we will not sell facial recognition technology to police departments in the United States until we have a national law in place, grounded in human rights, that will govern this technology,” Smith said.
Other tech giants have the same view
Amazon and IBM have previously made similar comments.
“We’re implementing a one-year moratorium on police use of Amazon’s facial recognition technology,” Amazon stated in a blog post. “We will continue to allow organisations … to use Amazon Rekognition to help rescue human trafficking victims and reunite missing children with their families.”
The statement continued: “We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology and, in recent days, Congress appears ready to take on this challenge. We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested.”
How facial recognition technology works
The technology uses biometrics to map facial features from a photograph or video. It compares the information with a database of known faces to find a match. It can be used for access control to a building, for example, or by border control officials on the lookout for known terrorists or drug smugglers. It may also help rescue human trafficking victims and reunite missing children with their families.
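In outline, the matching step boils down to comparing numeric “faceprints”. The Python sketch below is a minimal illustration under assumed conditions: the 128-dimensional embeddings, the two-person database and the distance threshold are all invented for the example, and random vectors stand in for the features a real system would extract from images.

```python
import numpy as np

# Hypothetical enrolled database mapping names to 128-dimensional face
# embeddings. Real systems derive such embeddings from photos with a
# neural network; random vectors stand in here purely for illustration.
rng = np.random.default_rng(seed=42)
database = {
    "person_a": rng.normal(size=128),
    "person_b": rng.normal(size=128),
}

def match_face(probe, threshold=0.6):
    """Return the name of the closest enrolled embedding, or None if
    nothing falls within the (assumed) distance threshold."""
    best_name, best_dist = None, float("inf")
    for name, enrolled in database.items():
        dist = np.linalg.norm(probe - enrolled)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

# A probe near person_a's enrolled embedding matches; an unrelated face does not.
probe = database["person_a"] + rng.normal(scale=0.01, size=128)
print(match_face(probe))                 # person_a
print(match_face(rng.normal(size=128)))  # None
```

In a deployed system the embeddings would come from a trained neural network and the threshold would be tuned to trade false matches against misses; it is the choice of that threshold, and of who is enrolled in the database, that the proposed regulation would govern.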
But it is in the day-to-day policing of civil rights protests, notably the current Black Lives Matter demonstrations, that many people see a more sinister use for the technology, particularly as there is no legal framework dictating how it may be used.
“We are terrified that so many of the images that are being posted on social media by protesters will be weaponised by police against them,” said Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, which is pushing for limits on the technology in New York.
“It’s just deeply chilling to think that engaging in protected activity, exercising your most fundamental rights, could end you up in a police database,” he told US-based CBS News.
Tech less effective on darker skin
Canada’s CTV News reports that facial recognition systems have also drawn criticism for misidentifying people with darker skin.
This is backed up by CBS News. “Research shows these errors aren’t aberrations. An MIT (Massachusetts Institute of Technology) study of three commercial gender-recognition systems found they had error rates of up to 34% for dark-skinned women – a rate nearly 49 times that for white men,” the network said.
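Error rates like these are computed per demographic group from labelled test results. The short Python sketch below shows the arithmetic; the groups and outcomes are invented for illustration and are not the MIT study’s data.

```python
from collections import defaultdict

# Hypothetical test records: (demographic_group, prediction_was_correct).
# These figures are invented for illustration, not the MIT study's data.
results = [
    ("darker-skinned women", False), ("darker-skinned women", True),
    ("darker-skinned women", False), ("lighter-skinned men", True),
    ("lighter-skinned men", True),   ("lighter-skinned men", True),
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

# Report the per-group error rate, the figure the studies compare.
for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: {rate:.0%} error rate over {totals[group]} trials")
```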