When Algorithms Fail: Exposing Age Discrimination Through AI Hiring Tools

Author: Melissa A. Silver, Brightmine Principal Legal Editor


A recent court ruling in Mobley v. Workday is making waves across HR when it comes to age discrimination and the tools used to screen job applicants. A federal district court in California granted preliminary certification of a nationwide collective action on plaintiff Derek Mobley's claim under the Age Discrimination in Employment Act (ADEA). The collective includes all individuals aged 40 and over who applied for jobs through Workday, Inc.'s application platform from September 24, 2020, to the present and were denied employment recommendations.

Workday offers human resource management services, including applicant screening, across various industries. Its platform claims to reduce hiring time by using artificial intelligence (AI) systems to move candidates forward in the recruiting process. Mobley argues that these systems reflect employer biases and rely on biased training data, and that because candidates cannot advance unless they pass Workday's screening algorithms, qualified applicants are often screened out of the hiring process.

Mobley claims that since 2017 he has applied to more than 100 positions with companies that use Workday's platform and was rejected each time. In support of his motion, Mobley submitted declarations from four other plaintiffs describing similar automated rejections despite meeting the stated qualifications.

The court held that whether Workday's system had a disparate impact on applicants aged 40 and over, that is, whether a facially neutral screening practice disproportionately harmed that protected group, is a common question that can be addressed collectively.

While this case is still in its early stages and its ultimate outcome is unknown, it is a gut check for HR: organizations looking to streamline their workflows should make sure they are choosing and using AI tools wisely.

Although the Mobley case may be unsettling HR departments that use AI screening applications, it is not the first age discrimination case involving artificial intelligence to make the news. In 2023, the Equal Employment Opportunity Commission (EEOC) settled with iTutorGroup over allegations of systemic age discrimination. That case also involved algorithmic discrimination, with allegations that the company's application software automatically rejected female applicants over 55 and male applicants over 60, eliminating over 200 qualified candidates. As part of the settlement, iTutorGroup agreed to pay $365,000 to those affected and to revise its hiring practices.

Key Takeaways

As AI technologies continue to advance and organizations increasingly adopt them, employees, particularly those in HR, are actively seeking ways to use AI to streamline repetitive tasks and enhance productivity. The talent acquisition process exemplifies this trend. Companies face many recruitment challenges, such as lengthy hiring timelines and the difficulty of filtering a large pool of candidates down to qualified applicants. It is not surprising, then, that HR departments are turning to AI tools to facilitate parts of the selection process.

However, every benefit carries inherent risks. The Mobley case underscores the need for organizations to establish quality and safety standards when integrating AI into their operations, especially when working with third-party vendors. To mitigate potential issues, organizations should take several proactive steps:

  • When utilizing a third-party vendor, thoroughly review their policies, agreements and product documentation to gain insight into the quality and safety standards they uphold.
  • Set stringent quality standards for AI applications, including those provided by third-party vendors, that address essential aspects such as data quality, security measures, privacy protocols, safeguards against AI errors and strategies for preventing bias and discrimination.
  • Conduct regular audits of new AI tools to ensure compliance with established criteria and legal requirements concerning quality assurance, security integrity, bias mitigation and discrimination prevention (a simple illustration of one such check follows this list). Notably, New York City already mandates bias audits for employers using automated employment decision tools. Additionally, effective February 1, 2026, Colorado will require businesses to take reasonable care to protect residents from algorithmic discrimination based on any characteristic protected under federal or state law.
  • Implement a structured auditing schedule for the ongoing evaluation of deployed AI systems to protect against bias and discrimination in employment decisions.
  • Ensure that all vetting and auditing practices associated with AI comply with relevant laws and regulations and continue to track legal developments.
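
For HR teams wondering what a basic bias audit can look like, the sketch below applies the EEOC's four-fifths (80%) rule of thumb to screening outcomes: it compares the selection rate of applicants aged 40 and over with the rate for younger applicants and flags the results for review when the ratio falls below 0.8. This is a minimal illustration only, not legal or statistical advice; the data fields and the 0.8 threshold are assumptions, and a defensible audit would also include statistical significance testing and legal review.

    # Minimal sketch of a four-fifths (80%) rule check for age-based
    # adverse impact in screening outcomes. Field names and the 0.8
    # threshold are illustrative assumptions, not a compliance standard.
    from dataclasses import dataclass

    @dataclass
    class Applicant:
        age: int        # applicant age at time of application
        advanced: bool  # True if the screening tool recommended the applicant

    def selection_rate(group: list[Applicant]) -> float:
        """Share of a group that the screening tool advanced."""
        if not group:
            return float("nan")
        return sum(a.advanced for a in group) / len(group)

    def four_fifths_check(applicants: list[Applicant], threshold: float = 0.8) -> dict:
        protected = [a for a in applicants if a.age >= 40]   # ADEA-protected group
        comparison = [a for a in applicants if a.age < 40]
        p_rate = selection_rate(protected)
        c_rate = selection_rate(comparison)
        ratio = p_rate / c_rate if c_rate else float("nan")
        return {
            "protected_rate": p_rate,
            "comparison_rate": c_rate,
            "impact_ratio": ratio,
            "flag_for_review": ratio < threshold,  # below 0.8 suggests possible adverse impact
        }

    # Example: if 30% of over-40 applicants advance versus 60% of younger
    # applicants, the impact ratio is 0.5 and the check flags it for review.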

By leveraging AI effectively, organizations can significantly boost productivity in their hiring process while enhancing performance and fostering innovation. It is crucial, however, that these advancements do not inadvertently lead to bias or discrimination against employees or job candidates. Companies must therefore commit to using AI responsibly and maintain human oversight of AI tools to avoid potential negative consequences.