London is one of the most watched cities in the world: Its inhabitants are caught on camera an average of 300 times a day, and the British capital has become a testbed for police use of live facial recognition. But the technology, which powers a multibillion-dollar market for security firms and building management, has troubling limitations. Making matters worse for the machines, a special team of human officers has, anecdotally, been doing a better job than the cameras.
London’s Metropolitan Police conducted 10 trials of live facial recognition from 2016 to 2019, using face-matching software from Japanese technology firm NEC Corp. and cameras mounted on surveillance vans and poles. But just 19% of the matches the system made turned out to be correct, according to an independent study of the trials by the University of Essex. The majority of the time, the software was wrong.
There were other problems, according to Pete Fussey, who co-authored the study. Police had what he called a “deference to the algorithm,” tending to agree with whatever the software suggested. “I saw that so many times,” he says. “I was in a police van and I remember people saying ‘If we’re not sure, we should just assume it’s a match.’” Fussey never saw an officer’s assessment overturned when it agreed with the software; assessments that disagreed with the system, however, often were.
A spokeswoman for the Met said that facial recognition searches “can be part of a careful investigative process with any match being an intelligence lead for the investigation to progress.” She declined to say how many arrests had been made as a result of the technology.
Fortunately, there may be a human answer. One evening during the trials, when officers were parked near a