AI-based recruiting tools could violate the Americans with Disabilities Act

Daming says several state laws regulate the collection and use of biometric data, and a few states specifically regulate the use of AI in recruiting tools. “In addition to the actual regulations, people are increasingly concerned about how their data is being captured and used by companies,” Daming says. “They may also be concerned that an algorithm rather than a human is evaluating their suitability for a role. There’s a creepiness factor to that.”

The guidance is part of the EEOC’s Artificial Intelligence and Algorithmic Fairness Initiative, an agency-wide effort launched last year to ensure that software, including AI, machine learning, and other emerging technologies used in hiring and other employment decisions, complies with the federal civil rights laws the EEOC enforces. The goal of the initiative is to help employers, employees, job applicants, and vendors ensure that these technologies are used fairly and in accordance with federal equal employment opportunity laws.

The Federal Trade Commission (FTC), which monitors companies for unfair or deceptive business practices, has also recently put AI in the spotlight, Daming says. In April 2021, the agency released informal guidance advising companies to be mindful of potential biases when using algorithmic decision-making software. In December 2021, the FTC signaled its intent to pursue rulemaking to ensure that algorithmic decision-making does not result in unlawful discrimination.

“The best thing employers can do is be transparent with applicants and employees about how the technology works,” Daming says. “That gives individuals control over whether they want their data evaluated and whether they may need an accommodation. Of course, that requires employers to fully understand the technology and how it can impact job applicants.”

More than a dozen of the world’s largest employers agree that bias is a significant problem in sourcing, recruiting, and hiring algorithms. The Data & Trust Alliance was established in December 2021 to focus on responsible data and AI practices. Members include Walmart, Meta (formerly known as Facebook), IBM, American Express, CVS Health, General Motors, Humana, Mastercard, Nielsen, Nike, Under Armour, Deloitte, and Diveplane. Together, they employ more than 3.7 million people, according to a press release.
