Microsoft is updating its ethics policy on artificial intelligence and will no longer allow companies to use its technology to infer emotion, gender or age from people’s faces, the company said.
As part of its new Responsible AI Standard, Microsoft says it plans to “keep people and their goals at the center of system design decisions”. The high-level principles will lead to concrete changes in practice, the company says, with some features being tweaked and others discontinued.
Microsoft’s Azure Face service, for example, is a facial recognition tool used by companies such as Uber as part of their identity verification processes. Now, any company that wants to use the service’s facial recognition features, including those that have already built them into their products, must apply for access and demonstrate that it meets Microsoft’s ethical standards for AI and that the features benefit the end user and society.
Even companies granted access will no longer be able to use some of Azure Face’s more controversial features, Microsoft says: it is retiring facial analysis capabilities that purport to infer emotional states and attributes such as gender or age.
“We worked with internal and external researchers to understand the limitations and potential benefits of this technology and make the tradeoffs,” said Sarah Bird, a product manager at Microsoft. “Particularly in the case of emotion classification, these efforts raised important questions about privacy, the lack of consensus on a definition of ‘emotions’ and the inability to generalize the association between facial expression and emotional state across use cases.”
Microsoft isn’t scrapping emotion recognition altogether: the company will still use it internally for accessibility tools such as Seeing AI, which attempts to describe the world aloud to visually impaired users.
Likewise, the company has restricted the use of its Custom Neural Voice technology, which can create synthetic voices that sound nearly identical to their original source. “It’s … easy to imagine how it could be used to inappropriately impersonate speakers and mislead listeners,” said Natasha Crampton, the company’s chief responsible AI officer.
Earlier this year, Microsoft began watermarking its synthetic voices, embedding small, inaudible fluctuations in the output so the company can tell when a recording was made with its technology. “With the advancement of neural TTS technology, making synthetic speech indistinguishable from human voices, there is a risk of harmful deepfakes,” said Microsoft’s Qinying Liao.
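Microsoft has not published how its watermark works, but the general idea the article describes, hiding a key-dependent, inaudible pattern in the audio and later testing for it, can be illustrated with a toy spread-spectrum sketch in Python. Every name and value below (KEY, STRENGTH, the sine-wave stand-in for speech) is hypothetical and chosen for a clear demonstration, not a description of Microsoft’s actual method:

```python
import numpy as np

# Hypothetical parameters, exaggerated for a clear demo: a real system
# would hide the mark far below these levels and spread it across time
# and frequency to survive compression and re-recording.
KEY = 2024        # secret seed for the watermark pattern (hypothetical)
STRENGTH = 0.01   # watermark amplitude relative to the audio signal

def watermark_pattern(n_samples: int, key: int = KEY) -> np.ndarray:
    """Pseudorandom +/-1 pattern derived from the secret key."""
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=n_samples)

def embed(audio: np.ndarray, key: int = KEY) -> np.ndarray:
    """Add the key's low-amplitude pattern to the signal."""
    return audio + STRENGTH * watermark_pattern(len(audio), key)

def detect(audio: np.ndarray, key: int = KEY, threshold: float = 0.5):
    """Correlate with the key's pattern; scores near 1 mean 'marked'."""
    pattern = watermark_pattern(len(audio), key)
    score = float(np.dot(audio, pattern)) / (STRENGTH * len(audio))
    return score > threshold, score

if __name__ == "__main__":
    sample_rate = 16_000
    t = np.arange(sample_rate) / sample_rate    # one second of audio
    speech = 0.3 * np.sin(2 * np.pi * 220 * t)  # stand-in for TTS output
    marked = embed(speech)
    print(detect(speech))   # (False, score near 0): no watermark present
    print(detect(marked))   # (True, score near 1): watermark detected
```

One useful property of this correlation-based approach is that the detector needs only the secret key, not the original unmarked audio, which is what makes provenance checks practical at scale; whether Microsoft’s scheme works this way has not been disclosed.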