Microsoft Corp. will stop selling artificial intelligence-based facial-analysis software tools that infer a subject’s emotional state, gender, age, mood and other personal attributes after the algorithms were shown to exhibit problematic bias and inaccuracies.
Existing customers of the tools can keep using them for a year before they expire. The company is also limiting the use of other facial-recognition programs to ensure the technologies meet Microsoft’s ethical AI guidelines. New customers will need to apply for access to facial-recognition features in Microsoft Azure Face API, Computer Vision and Video Indexer, while current customers have a year to apply for continued access. The changes were outlined with the release of the second update to Microsoft’s Responsible AI Standard, in blogs written by Chief Responsible AI Officer Natasha Crampton and Azure AI Product Manager Sarah Bird.
The changes come two years after Microsoft and Amazon.com Inc., whose cloud unit competes with Azure, paused sales of facial-recognition technology to U.S. police agencies in the wake of research showing it performed poorly on subjects with darker skin. Some states have passed laws governing the use of such products, including Washington, where both tech companies are headquartered. Even as some of the biggest technology companies back away from the controversial technology, smaller companies such as NEC Corp. and Clearview AI maintain robust businesses selling facial-recognition tools for use in ways that raise privacy and security questions, including by law enforcement.
Microsoft isn’t doing away completely with the use of AI to help read human reactions. The company continues to add other features that make guesses about people’s feelings or