Microsoft is eliminating emotion recognition capabilities from its facial recognition technology

When Microsoft said this week that it would be retiring some emotion-related facial recognition technologies, the executive who leads its responsible AI work cautioned that the science of emotions is far from settled.


Natasha Crampton, Microsoft’s Chief Responsible AI Officer, wrote in a blog post, “Experts inside and outside the company have pointed out the lack of scientific consensus on the definition of ‘emotion’ and the challenges associated with generalizing conclusions across use cases, geographies, and demographics, as well as heightened privacy concerns associated with this type of feature.”

Microsoft’s move, part of a larger announcement of its “Responsible AI Standard,” is the most prominent example yet of a company pulling back from emotion-detection AI, a technology that has drawn considerable academic criticism.

To automatically assess a person’s emotional state, emotion-recognition technology often looks at a variety of characteristics, including facial expressions, voice tone, and word choice.

Many technology companies have developed software for business, education, and customer service that claims to read, recognize, or quantify emotions.

One such product offers real-time analysis of callers’ emotions so that call-center agents can adjust their behavior accordingly. Another monitors students’ emotions during classroom video calls so that teachers can gauge their performance, interest, and participation.

This technique has been met with skepticism for many reasons, including questionable effectiveness. Sandra Wachter, associate professor and senior researcher at the University of Oxford, said emotional AI has “at best no scientific basis and at worst is complete pseudoscience”. She called the implementation in the private sector “very worrying”.

Like Crampton, she pointed out that emotion AI’s shaky scientific footing is far from its only flaw.

“Even if we find evidence that AI is capable of accurately predicting emotions, that would not justify its use,” she said. Our innermost thoughts and feelings, she added, are protected by human rights such as the right to privacy.

It’s unclear how many big tech companies are using technology to read emotions. In May, more than 25 human rights organizations published a letter urging Zoom CEO Eric Yuan not to use emotion AI technology.

The letter was sent after a report by the technology news site Protocol suggested that Zoom could adopt the technology, based on its recent research in the field. Zoom did not respond to a request for comment.

In addition to questioning the scientific basis of emotion AI, human rights organizations have argued that the technology is misleading and discriminatory.

Lauren Rhue, an assistant professor of information systems at the University of Maryland’s Robert H. Smith School of Business, found that two facial recognition programs (including Microsoft’s) consistently interpreted black subjects as having more negative emotions than white subjects: one read black faces as angrier than white faces, while Microsoft’s read them as showing more contempt.

Microsoft’s policy changes are primarily aimed at Azure, its cloud platform for selling software and other services to businesses and organizations. Azure’s emotion-recognition capability, introduced in 2016, was promoted as being able to recognize “happiness, sadness, fear, anger, and more.”
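For illustration, here is a minimal sketch (in Python, using the requests library) of how that emotion attribute was exposed through Azure’s Face “Detect” REST endpoint; the resource name, key, and image URL are placeholders, and the response shape reflects the v1.0 Face API rather than anything stated in Microsoft’s announcement.

```python
import requests

# Illustrative only: a sketch of the (now-retired) emotion attribute in the
# Azure Face "Detect" REST call. ENDPOINT, KEY, and IMAGE_URL are placeholders.
ENDPOINT = "https://<your-face-resource>.cognitiveservices.azure.com"
KEY = "<subscription-key>"
IMAGE_URL = "https://example.com/portrait.jpg"

response = requests.post(
    f"{ENDPOINT}/face/v1.0/detect",
    params={"returnFaceAttributes": "emotion"},
    headers={
        "Ocp-Apim-Subscription-Key": KEY,
        "Content-Type": "application/json",
    },
    json={"url": IMAGE_URL},
    timeout=30,
)
response.raise_for_status()

for face in response.json():
    # Each detected face carried confidence scores for the emotion labels
    # Microsoft listed, e.g. happiness, sadness, fear, and anger.
    print(face["faceAttributes"]["emotion"])
```

Under the new standard, this attribute is no longer offered to new customers, and existing customers are expected to lose access as the capability is phased out.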

Microsoft has also committed to re-evaluating emotion-recognition AI in all of its systems to weigh the risks and benefits of the technology in different areas. The company plans to continue using emotion AI in Seeing AI, which helps visually impaired people by verbally describing their surroundings.

Andrew McStay, professor of digital life and director of the Emotional AI Lab at Bangor University, said in a written statement that he would have preferred Microsoft to halt all emotion AI development. Because he regards emotion AI as ineffective, he sees no reason to keep it in any product.

“I’m quite excited to see if Microsoft will eliminate all forms of emotion and psychophysiological perception from all of its operations,” he wrote, calling it an easy win.

The new standard also includes commitments to fairness in speech-to-text technology; according to one study, error rates for black users are about twice as high as those for white users. Microsoft has also restricted access to Custom Neural Voice, a feature that can produce a near-exact imitation of a person’s voice, out of concern that it could be used as a tool for deception.

Crampton noted that these changes were essential in part because of the lack of government regulation of AI systems.

“Artificial intelligence is becoming more and more important in our lives, but our laws are lagging behind,” she noted. “They haven’t grasped the particular dangers of AI or the demands of society. While there are signs that government action on AI is increasing, we recognize that it is also our responsibility to act. We believe it is necessary to ensure that AI systems are designed responsibly.”
