Avoiding Compliance Pitfalls in the Evolving AI Legal Landscape

Author: Brightmine Editorial Team


As artificial intelligence becomes embedded in hiring, performance management and other HR processes, employers face growing legal obligations. With no comprehensive federal legislation in place, states and localities continue to fill the regulatory gap, leaving employers, especially multistate employers, to navigate a complex patchwork of requirements.

The current administration has signaled interest in developing national AI standards that could eventually preempt state and local laws, and with them the patchwork of compliance requirements. Until a preemptive federal law is enacted, however, employers must continue tracking and complying with any applicable jurisdiction-specific AI laws.

States and Localities Moving to Curb AI Bias in Employment Decisions

New York City was the first to impose comprehensive requirements, mandating in 2023 that employers using an "automated employment decision tool" (AEDT) conduct an annual bias audit and provide specific disclosures to candidates and employees subject to the tool.
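
For context, the core metric in a bias audit of this kind is typically a comparison of selection rates across demographic categories. The Python sketch below is a minimal, hypothetical illustration of computing an impact ratio from screening outcomes; the sample data and group names are invented, and the sketch is not a substitute for the independent audit and methodology the New York City rule requires.

    # Minimal sketch: selection rates and impact ratios by category.
    # Sample data is hypothetical; an actual NYC bias audit must be
    # performed by an independent auditor under the rule's methodology.

    # category -> (selected, total screened)
    outcomes = {
        "Group A": (48, 120),
        "Group B": (30, 100),
        "Group C": (12, 60),
    }

    selection_rates = {g: sel / total for g, (sel, total) in outcomes.items()}
    highest_rate = max(selection_rates.values())

    for group, rate in selection_rates.items():
        impact_ratio = rate / highest_rate  # 1.0 = most-selected category
        print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f}")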

Several states have followed with their own frameworks aimed at preventing algorithmic discrimination, including:

  • California amended its Fair Employment and Housing Act (FEHA) regulations (effective October 1, 2025) to clarify that using automated decision systems in a way that results in discrimination based on a protected characteristic may violate state law.
  • Illinois (effective January 1, 2026) bars employers from using AI that has the effect of subjecting individuals to discrimination based on a protected class in recruitment, hiring, promotion and other employment decisions.
  • Texas (effective January 1, 2026) prohibits employers from developing or deploying an AI system with intent to discriminate against protected classes. 

Looking Ahead

Colorado's AI Law: A Rapidly Changing Target

Colorado's law, which requires organizations to mitigate the risk of algorithmic discrimination when using AI to make "consequential decisions," including employment decisions, is scheduled to take effect June 30, 2026. However, Colorado lawmakers and regulators have signaled that the law is likely to undergo substantial revisions before that date. Colorado employers should track developments closely because compliance requirements may change.

California's CPPA ADMT Regulations: Effective January 1, 2027

Beginning January 1, 2027, employers covered by the California Consumer Privacy Act (CCPA) that use automated decision-making technology (ADMT) to make significant employment-related decisions will face expanded compliance obligations. Employers must: 

  • Conduct a risk assessment before using ADMT for any significant decision; 
  • Provide a detailed pre-use notice to California residents; and  
  • Submit any risk assessments to the California Privacy Protection Agency (CPPA) by April 1 of the following year.

With less than a year until the effective date, California employers should begin preparing now.

As more states and localities propose AI-specific legislation, the compliance landscape is expected to become more complex before it becomes simpler.

Mitigating the Risk

Algorithmic discrimination is not a new problem; rather, it is a new flavor of an old one: employers making decisions about an applicant or employee based on protected characteristics, such as race, religion, sex, national origin or age, instead of a legitimate, nondiscriminatory business reason. AI introduces new pathways for bias, but when implemented responsibly, it can also help mitigate bias.

Sidestepping AI tools altogether may feel like the safest move, but that is not necessarily practical or beneficial. AI tools offer many benefits and, used correctly, can even help mitigate discrimination risk. Human judgment itself is a frequent source of bias that can lead to unlawful discrimination; leveraging AI to augment or supplement human decision-making can result in fairer, more objective decisions and outcomes.

To responsibly select and implement AI tools in the employment process, follow these key steps:

1. Ask Critical Questions of AI Vendors

Because AI tools rely on algorithms and models that users often cannot examine directly, employers must not accept a tool provider's marketing claims without performing their own due diligence. Key questions include:

  • Has the tool undergone a bias audit, and what were the results?
  • What data security and privacy safeguards are in place?
  • What inputs, criteria or factors does the model rely on to perform its functions, e.g., evaluating applicants or employees?
  • How is the model tested, validated and updated over time?

Courts are unlikely to accept an employer's defense of "the AI made me do it." Employers remain responsible for outcomes.

2. Track Key Metrics Before and After Implementing AI 

Before deploying AI tools within your organization, establish a thorough evaluation process to mitigate legal risks, particularly exposure to discrimination claims. For example:

  • Compare demographics (from voluntary self-disclosure) of candidates selected for interviews before and after implementing an AI screening tool.
  • Watch for meaningful shifts in representation among groups historically affected by discrimination, for example, if fewer women or people of color receive interview invitations after the tool is implemented, or if the opposite occurs.

One data point does not establish causation, but trends like these should trigger further examination, as in the sketch below.
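
A minimal Python sketch of this kind of before-and-after tracking follows. All figures, the group labels and the 5-percentage-point review threshold are assumptions for illustration only; a flagged shift is a prompt for closer review, not a legal standard or proof of discrimination.

    # Sketch: compare interview rates by group before vs. after an AI tool.
    # All figures are hypothetical and drawn from voluntary self-disclosure.

    before = {"Women": (40, 200), "Men": (55, 250)}  # (interviewed, applicants)
    after = {"Women": (25, 210), "Men": (60, 240)}

    FLAG_THRESHOLD = 0.05  # flag shifts over 5 percentage points (assumed)

    for group in before:
        rate_before = before[group][0] / before[group][1]
        rate_after = after[group][0] / after[group][1]
        shift = rate_after - rate_before
        status = "REVIEW" if abs(shift) > FLAG_THRESHOLD else "ok"
        print(f"{group}: {rate_before:.1%} -> {rate_after:.1%} ({shift:+.1%}) {status}")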

3. Use Extra Caution with Video Technology and Facial Recognition 

Legal restrictions on video technology and facial recognition should be on an employer's radar before these tools are deployed. Some states already regulate them:

  • Illinois requires employers using AI to evaluate applicant-submitted video interviews to take certain compliance steps to guard against discrimination and protect candidate privacy.
  • Maryland prohibits the use of facial recognition during interviews without the applicant's consent.

Given growing concern over generative AI and deepfake videos, and the sensitive nature of biometric data, employers should take extra precautions with these types of tools and tread carefully before implementing them in recruiting and hiring practices.

4. Use AI Alongside Human Judgment, Not as a Substitute for It

Both humans and algorithms can be influenced by bias, but combining human intelligence with artificial intelligence offers a better chance to rein in and counteract the blind spots of each. For instance, an AI tool can review a human-written job posting for biased language or flag potentially problematic patterns in performance evaluations.
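
As a toy illustration of the job-posting review idea, the snippet below uses a simple keyword screen standing in for what a real AI tool would do with far more nuance: it scans a posting for terms on a watch list so a human can review them in context. The watch list and sample posting are invented for illustration.

    # Toy illustration only: a keyword screen, not an actual AI model.
    # The watch list is hypothetical; real tools weigh context and nuance.

    WATCH_LIST = {"young", "energetic", "recent graduate", "rockstar", "ninja"}

    posting = "We want a young, energetic rockstar to join our sales ninjas."

    lowered = posting.lower()
    flags = sorted(term for term in WATCH_LIST if term in lowered)
    if flags:
        print("Review these terms in context:", ", ".join(flags))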

AI-related legislation and guidance are developing at a rapid pace. HR teams should maintain active oversight of:

  • Federal, state and local developments involving AI impact assessments, bias audits and employee/applicant disclosures.
  • Internal practices and policies, including evaluation and documentation practices, to ensure ongoing compliance.

Multistate employers should pay particular attention, as compliance obligations may differ significantly by location.