AI in Recruiting: DEI Solution or Discrimination Trap?

Author: Emily Scace, Brightmine Legal Editor

November 7, 2023


Recently, the Equal Employment Opportunity Commission (EEOC) made headlines when it announced a settlement with the tutoring services provider iTutorGroup to resolve allegations of systemic discrimination against older applicants.

While age discrimination cases are nothing unusual for the EEOC, the iTutorGroup case had a new wrinkle: it involved algorithmic discrimination. According to the EEOC, iTutorGroup had programmed its online application software to automatically reject female applicants aged 55 or older and male applicants aged 60 or older, eliminating more than 200 qualified applicants from consideration. Under the terms of the settlement, iTutorGroup agreed to pay $365,000 to the automatically rejected applicants and to revamp its training and hiring practices to avoid discrimination in the future.

The iTutorGroup settlement should serve as a cautionary tale for employers, as artificial intelligence (AI) and machine learning are becoming increasingly embedded in recruiting and hiring. With myriad software and technology tools promising to streamline employers' processes and help them more effectively identify the best candidates, it is important for organizations to ask questions and fully vet any tools they consider adopting. While AI may offer benefits, employers must ensure that any tools they use do not discriminate - either deliberately or accidentally - based on protected characteristics like race, age, sex and disability. Neglecting to perform this due diligence can result in violations of federal, state and local antidiscrimination laws.

According to guidance from the EEOC, if an AI-based or algorithmic tool adversely impacts individuals of a particular race, color, religion, sex (which includes pregnancy, sexual orientation and gender identity) or national origin - or a certain combination of these characteristics - an employer's use of the tool will likely violate Title VII of the Civil Rights Act unless the employer can show that the practice is job-related and consistent with business necessity. This is generally true even if the tool was designed or is administered by a third party, such as a software vendor.
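The EEOC's guidance frames adverse impact in terms of selection rates, including the long-standing "four-fifths rule" of thumb: if one group's selection rate is less than 80% of the most-favored group's rate, the difference may indicate adverse impact. As a purely illustrative sketch in Python - the group labels and counts below are invented, not drawn from any real case or audit:

    # Hypothetical illustration of the four-fifths rule of thumb.
    # All counts are invented for the example, not real hiring data.
    applicants = {"group_a": 200, "group_b": 150}  # applicants per group
    selected = {"group_a": 60, "group_b": 25}      # advanced by the tool

    rates = {g: selected[g] / applicants[g] for g in applicants}
    highest = max(rates.values())

    for group, rate in rates.items():
        ratio = rate / highest
        flag = "possible adverse impact" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")

In this invented data, group_b's ratio works out to roughly 0.56, well under the 0.8 threshold. Note that the EEOC treats the four-fifths rule only as a rule of thumb: smaller differences can still be unlawful, and a ratio above 0.8 does not by itself establish compliance.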

While the technology is evolving quickly, with new offerings every day, examples of algorithmic or AI-based tools and software include:

  • Resume scanners that prioritize applications using certain keywords (a toy sketch of this kind of filter appears after this list);
  • Virtual assistants that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements;
  • Video interviewing software that evaluates candidates based on facial expressions and speech patterns; and
  • Testing software that provides job fit scores based on an applicant's personality, aptitude, cognitive skills or perceived cultural fit.
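
To make the first category concrete, here is a deliberately simplified sketch of keyword-based screening. The keywords, weights and threshold are all hypothetical, and commercial products are far more sophisticated - but the underlying risk is the same: whatever rules the tool encodes, it applies them mechanically at scale.

    # Toy keyword screener -- keywords, weights and threshold are invented.
    # Illustrates how pre-defined rules mechanically filter candidates.
    KEYWORDS = {"python": 2, "sql": 1, "leadership": 1}
    REQUIRED_SCORE = 2

    def screen(resume_text):
        """Return (score, advanced) for one resume."""
        text = resume_text.lower()
        score = sum(w for kw, w in KEYWORDS.items() if kw in text)
        return score, score >= REQUIRED_SCORE

    resumes = [
        "Led a team building Python and SQL pipelines",  # score 3 -> advances
        "Ten years of equivalent experience using R",    # score 0 -> rejected
    ]
    for r in resumes:
        print(screen(r), "-", r)

The second candidate may be perfectly qualified but is screened out because the rules do not recognize equivalent experience. If a pattern like that correlates with a protected characteristic, the tool can produce adverse impact no one intended - exactly the kind of effect that vetting and auditing are meant to catch.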

And beyond the EEOC, President Biden has signaled that managing both the risks and the promise of AI will be a growing focus. A recent Executive Order (EO) calls for a coordinated federal approach to regulating the use of AI and identifies support for workers as a key part of its responsible development and use.

At the local level, New York City has opted to tackle the issue of AI discrimination head-on. Earlier this year, a first-of-its-kind law took effect that addresses automated employment decision tools (AEDTs) - an umbrella term that includes AI, machine learning, data analytics and statistical modeling when used to substantially assist or replace discretionary decision-making in employment decisions affecting individuals.

An employer covered by the New York City law may only use an AEDT if the tool has been the subject of a bias audit conducted by an independent auditor within the past year. The rules for bias audits are extensive, specifying particular calculations and analyses to assess whether an AEDT disproportionately harms people along racial, ethnic or gender lines. Employers must share the results of these bias audits publicly by posting a summary online. As the use of AI and machine learning tools for recruiting and hiring continues to grow, New York City's AEDT law may serve as a model for other jurisdictions looking to regulate this emerging area.
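For tools that score candidates, the published rules describe a calculation along roughly these lines (simplified here): compute each category's "scoring rate" - the share of that category receiving a score above the sample's median score - then divide it by the highest category's rate to get an impact ratio. A minimal sketch with invented categories and scores:

    # Simplified impact-ratio calculation in the spirit of the NYC AEDT
    # rules for scoring tools. Categories and scores are invented.
    from statistics import median

    scores = [  # (category, score assigned by a hypothetical AEDT)
        ("category_a", 88), ("category_a", 72), ("category_a", 91),
        ("category_a", 65), ("category_b", 70), ("category_b", 61),
        ("category_b", 84), ("category_b", 59),
    ]

    cutoff = median(s for _, s in scores)  # sample median score

    def scoring_rate(cat):
        group = [s for c, s in scores if c == cat]
        return sum(s > cutoff for s in group) / len(group)

    rates = {c: scoring_rate(c) for c in {c for c, _ in scores}}
    top = max(rates.values())
    for cat, rate in sorted(rates.items()):
        print(f"{cat}: scoring rate {rate:.0%}, impact ratio {rate / top:.2f}")

An actual audit must break these figures out by sex and race/ethnicity categories, including intersectional ones, and is subject to the rules' detailed requirements - this sketch conveys only the shape of the math.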

Balancing Risks and Benefits

Given the rapidly changing legal landscape surrounding AI, some employers may conclude that the risks are not worth any purported benefits. But writing these tools off entirely may also be premature.

Although AI tools can perpetuate bias and discrimination if implemented without sufficient vetting or oversight, many aim to do the opposite: helping employers broaden their candidate pools, identify the factors that truly predict success in a role, correct pay inequities and more.

To mitigate the risk, the EEOC recommends that employers looking to adopt AI ask software vendors whether they have evaluated their offerings for disproportionate harm to protected groups. But employers should not simply rely on vendors' assurances. Rather, they should conduct their own evaluations of any tool they wish to implement - both before implementation and throughout its use - to track its impact in the specific context of their organization. This may mean upskilling or hiring staff with enough data literacy to ask the right questions and perform the right analyses.
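What that ongoing evaluation looks like will vary by organization, but even a simple recurring check on the tool's outcomes can surface drift early. A hypothetical sketch - the quarters, counts and reuse of the 0.8 four-fifths threshold are illustrative choices, not a legal standard:

    # Hypothetical quarterly monitoring of a screening tool's outcomes.
    # All figures are invented; 0.8 echoes the four-fifths rule of thumb.
    quarterly = {
        "2023-Q1": {"group_a": (120, 40), "group_b": (100, 32)},
        "2023-Q2": {"group_a": (130, 45), "group_b": (110, 20)},
    }  # per group: (applicants, advanced by the tool)

    for quarter, groups in quarterly.items():
        rates = {g: adv / apps for g, (apps, adv) in groups.items()}
        top = max(rates.values())
        for g, rate in rates.items():
            if rate / top < 0.8:
                print(f"{quarter}: review {g} (impact ratio {rate / top:.2f})")

In this invented data, the second quarter would be flagged for review - prompting exactly the kinds of questions about the tool, the applicant pool and the job requirements that a data-literate team is equipped to ask.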

While AI will never fully replace human judgment and discretion, thoughtfully implemented technology can augment that judgment by surfacing relevant data to support a decision and flagging potential issues - enabling employers to counteract implicit bias and take a proactive approach to preventing discrimination.