Microsoft plans to eliminate facial analytics tools in push for ‘responsible AI’

For years, activists and academics have raised concerns that facial analysis software claiming to identify a person’s age, gender and emotional state could be biased, unreliable or invasive, and should not be sold.

Microsoft acknowledged some of those criticisms and said on Tuesday that it plans to remove those features from its artificial intelligence service for detecting, analyzing and recognizing faces. They will no longer be available to new users this week and will be phased out for existing users within the year.

The changes are part of Microsoft’s move to tighten controls on its artificial intelligence products. After a two-year review, a team at Microsoft has developed a “Responsible AI Standard,” a 27-page document that outlines the requirements for AI systems to ensure they will not have a harmful impact on society.

The requirements include ensuring that systems provide “valid solutions to the problems they are designed to solve” and “comparable quality of service for identified demographics, including marginalized groups.”

Before being released, technologies that would be used to make important decisions about a person’s access to employment, education, health care, financial services or any life opportunity are subject to a review by a team led by Natasha Crampton, Microsoft’s chief responsible AI officer.

Concerns were especially heightened at Microsoft over the emotion recognition tool, which labeled a person’s expression as anger, contempt, disgust, fear, happiness, neutral, sadness or surprise.

“There is a tremendous amount of cultural, geographic and individual variation in the way we express ourselves,” said Ms. Crampton. That led to reliability concerns, along with the larger question of whether “facial expression is a reliable indicator of your internal emotional state,” she said.

The age and gender analysis tools being eliminated, along with other tools that detect facial features such as hair and smiles, could be useful for interpreting visual images for, say, blind or partially sighted people, but the company decided it was problematic to make profiling tools generally available to the public, Ms. Crampton said.

In particular, she added, the system’s so-called gender classification was binary, “and that’s not consistent with our values.”

Microsoft will also put new controls on its facial recognition feature, which can be used to perform identity checks or search for a particular person. Uber, for example, uses the software in its app to verify that a driver’s face matches the ID registered to that driver’s account. Software developers who want to use Microsoft’s facial recognition tool must apply for access and explain how they plan to use it.

Users must also submit an application and explain how they will use other potentially abusable AI systems, such as Custom Neural Voice. The service can generate a human voiceprint from a sample of someone’s speech, so that authors can, for example, create synthetic versions of their voice to read their audiobooks in languages they do not speak.

Because the tool could be abused to make it appear that people have said things they have not, speakers must go through a series of steps to confirm that the use of their voice is authorized, and the recordings include watermarks that Microsoft can detect.

“We are taking concrete steps to live up to our AI principles,” said Ms. Crampton, who spent 11 years as a lawyer at Microsoft and joined the AI ethics group in 2018. “It’s going to be a huge journey.”

Microsoft, like other tech companies, has had problems with its artificially intelligent products. In 2016, it released a chatbot on Twitter called Tay, which was designed to learn “conversational understanding” from the users it interacted with. The bot quickly began spewing racist and offensive tweets, and Microsoft had to take it down.

In 2020, researchers found that speech-to-text tools developed by Microsoft, Apple, Google, IBM and Amazon worked less well for Black people. Microsoft’s system was the best of the bunch, but it misidentified 15 percent of words for white people, compared with 27 percent for Black people.

The company had collected diverse speech data to train its AI system but had not understood just how varied language could be. So it hired a sociolinguistics expert from the University of Washington to explain the language varieties Microsoft needed to know about. That went beyond demographics and regional variation into how people speak in formal and informal settings.

“It’s actually a bit misleading to think of race as a determinant of how someone speaks,” said Ms. Crampton. “What we learned in consultation with the expert is that there is in fact a huge range of factors influencing language variety.”

Ms. Crampton said the journey to resolve that speech-to-text disparity had helped inform the guidance set out in the company’s new standard.

“This is a critical normative period for AI,” she said, noting that Europe’s proposed regulations set rules and limits on the use of artificial intelligence. “We hope our standard can contribute to the clear, necessary discussion that needs to be had about the standards that technology companies should be held to.”

A lively debate about the potential harms of AI has been going on in the technology community for years, fueled by mistakes with real consequences for people’s lives, such as algorithms that determine whether or not people receive benefits. The Dutch tax authority wrongly took childcare allowances away from needy families when a flawed algorithm penalized people with dual citizenship.

Automated facial recognition and analysis software has been particularly controversial. Last year, Facebook shut down its decade-old system for identifying people in photos. The company’s vice president of artificial intelligence cited the “many concerns about the place of facial recognition technology in society.”

Several Black men have been wrongly arrested after flawed facial recognition matches. And in 2020, amid the Black Lives Matter protests that followed the police killing of George Floyd in Minneapolis, Amazon and Microsoft imposed moratoriums on the use of their facial recognition products by police in the United States, saying that clearer laws on its use were needed.

Since then, Washington and Massachusetts have passed legislation requiring, among other things, judicial oversight of police use of facial recognition tools.

Ms. Crampton said Microsoft had considered making its software available to law enforcement in states with laws on the books, but had decided against doing so for now. She said that could change as the legal landscape changed.

Arvind Narayanan, a computer science professor at Princeton and a prominent AI expert, said companies might be stepping back from technologies that analyze the face because they are “more visceral, as opposed to several other types of AI that may be questionable, but we don’t necessarily feel in our bones.”

Companies may also be realizing that, at least for now, some of these systems are not that commercially valuable, he said. Microsoft could not say how many users it had for the facial analysis features it is getting rid of. Mr. Narayanan predicted that companies would be less likely to abandon other invasive technologies, such as targeted advertising, which profiles people to choose the best ads to show them, because they are a “cash cow.”
