Perhaps the saying, “There is no such thing as bad publicity,” holds true for controversial technology companies. New York-based Clearview AI has been criticized by privacy advocates for years because of the way it has scraped billions of images from social-media networks to build a search engine for faces used by police departments. It was the subject of a New York Times investigation, and several countries including France and Canada have banned the company.
Still, at least 600 law-enforcement agencies have used its technology, and this week Clearview revealed it had offered the government of Ukraine free access to its “facial network” to help stave off the Russian invasion.
Ukraine’s Ministry of Defense has not said how it will use the technology, according to Reuters, which first reported the news citing Clearview Chief Executive Officer Hoan Ton-That as its main source. Ukraine’s government has also not confirmed that it is using Clearview, but Reuters reported that its soldiers could potentially use the technology to weed out Russian operatives at checkpoints. Of Clearview’s database of 10 billion faces, more than 2 billion come from Russia’s most popular social-media network, VKontakte, theoretically allowing the company to match many Russian faces to their social profiles.
Ukraine has received several offers of help from the tech world, including from Elon Musk and satellite operator MDA Ltd. But Clearview’s offer to Ukraine has, rightly, caused outrage among privacy campaigners. Chief among the concerns is that facial recognition makes mistakes. That is bad enough when it leads police to make a wrongful arrest. In a war zone, the consequences can be a matter of life and death.