Bias resulting from the use of artificial intelligence violates New Jersey's Law Against Discrimination (LAD), according to new state attorney general guidance.
New guidance from the Equal Employment Opportunity Commission (EEOC) cautions employers that directing employees to use wearable devices in the workplace may violate the Americans with Disabilities Act (ADA) and other federal antidiscrimination laws.
New technologies used to track and analyze workers and job applicants will often be governed by the Fair Credit Reporting Act (FCRA), according to the Consumer Financial Protection Bureau (CFPB).
Starting in 2026, Colorado businesses that use artificial intelligence (AI) in certain employment decisions will be required to take reasonable care to protect employees and job applicants from "algorithmic discrimination."
The US Department of Labor (DOL) has released a set of principles it says employers and developers should use when applying AI in the workplace, but it remains unclear whether these principles will carry regulatory force.
President Biden recently issued an Executive Order calling for a coordinated approach to regulating the use of artificial intelligence (AI), including in employment.
The EEOC has issued updated guidance about how the ADA applies to job applicants and employees with visual disabilities, including addressing issues related to the use of AI decision-making tools.
Employers that use AI and algorithmic decision-making tools must take care that the technology does not systematically disadvantage people based on their race, color, religion, sex, or national origin, according to a new guidance document from the Equal Employment Opportunity Commission (EEOC).