Microsoft is scrapping some bad AI facial recognition tools

As an outspoken advocate for properly regulating facial recognition technology, Microsoft announced that it would retire some of its AI tools in this space. AI remains one of the most contentious areas of technology, and it is becoming more common as companies look to integrate it into their platforms. Now Microsoft is ending its role in the potential for abuse that facial recognition technology carries, which could lead to incidents of racial profiling.

The breakdown you need to know:

After a two-year review, laid out in a 27-page document, the tech giant wants tighter controls over its artificial intelligence products. CultureBanx reported that in the past, Microsoft has asked governments around the world to regulate the use of facial recognition technology. The software giant wants to ensure that the technology, which has higher error rates for African Americans, does not invade personal privacy or become a tool for discrimination or surveillance.

Some companies rely heavily on Microsoft’s facial recognition technology. Uber, for example, uses the software in its app to check whether a driver’s face matches the ID on file for that same driver. This seems like a sensible way to use facial recognition tools.

AI Atrocities

A lot of damage can be caused by this kind of technology. MIT research shows that commercial artificial intelligence systems tend to have higher error rates for women and Black people. Some facial recognition systems misidentified fair-skinned men only 0.8% of the time, but had a 34.7% error rate for dark-skinned women.
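To see how such a disparity is measured, here is a minimal sketch of an audit that computes error rates separately for each demographic group. The function name, the group labels, and the sample data are all illustrative assumptions, not the MIT study's actual methodology or dataset.

```python
# Hypothetical sketch of a per-group error-rate audit. The sample data
# below is invented to mirror the disparity described above; it is not
# real study data.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples.
    Returns {group: fraction of records where prediction != truth}."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, predicted in records:
        totals[group] += 1
        if predicted != truth:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy data: 1 error in 100 for one group, 35 errors in 100 for another.
sample = (
    [("fair-skinned men", "male", "male")] * 99
    + [("fair-skinned men", "male", "female")] * 1
    + [("dark-skinned women", "female", "female")] * 65
    + [("dark-skinned women", "female", "male")] * 35
)
rates = error_rates_by_group(sample)
```

The point of auditing this way is that a single aggregate accuracy number can hide large gaps between groups, which is exactly the failure mode the MIT research exposed.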

In 2019, Microsoft quietly deleted its MS Celeb database, which contained more than 10 million images. The collected images included journalists, artists, musicians, activists, policy makers, writers and researchers. The deletion came after the tech company called on US politicians to better regulate recognition systems.

In addition, in Microsoft’s 2018 SEC annual report, it noted that “AI algorithms may be flawed. Data sets may be inadequate or contain biased information. If we enable or offer AI solutions that are controversial for their impact on human rights, privacy, employment or other social issues, we could suffer brand or reputation damage.”

What’s next:

Remember that artificial intelligence systems inherently learn what they are “taught”. The use of facial recognition technology has a disparate impact on people of color, further disenfranchising a group already facing inequality. It says a lot about the potential for harm built into AI that a company like Microsoft would walk away from the technology. The real question is whether the rest of the industry will do the same.
