The EU Artificial Intelligence Act: What US Employers Need to Know

Author: Robert S. Teachout, Brightmine Legal Editor


In March 2024, the European Union (EU) adopted the Artificial Intelligence (AI) Act. The Act, among other things, will regulate how AI can be used in employment by companies operating in the EU.

It is not the first legislation addressing the use of AI in employment, and it does not directly apply to companies that operate exclusively in the US. However, US employers that sell or operate AI services in the European market should recognize that failing to comply with the Act's requirements can result in heavy penalties.

Beyond this, even US employers across other industries should understand the Act's implications since it may set a precedent for similar legislation in the US. Some states already are going down that path. Colorado passed a law that will require employers that use AI in certain employment decisions to take reasonable care to protect employees and job applicants from "algorithmic discrimination." And Illinois and Maryland regulate how AI can be used in applicant video interviews.

But the EU's AI Act is the world's first comprehensive, horizontal legal framework for AI - that is, one that applies to all applications of AI across all sectors. The impact of the Act will be felt globally.

Key Provisions

The EU's AI Act assigns AI systems to one of four tiers based on a system's level of risk:

  • Unacceptable;
  • High;
  • Limited; or
  • Minimal.

Assignment to a tier generally correlates with the sensitivity of the data involved and with the AI system's application and use case. Although the Act does not itself address the use of personal data by AI, it states that current EU law (i.e., the General Data Protection Regulation (GDPR)) applies to the collection, use and protection of personal data, privacy and confidentiality in AI-based applications and technology. In general, the more sensitive the data an AI system handles, the higher its risk rating.

All AI practices that pose unacceptable risk to the safety, livelihoods and rights of people are strictly prohibited. Such practices include:

  • Using manipulative, deceptive and/or subliminal techniques to influence a person;
  • Exploiting a person's vulnerabilities due to their age, disability or social/economic situation; or
  • Using biometric data to categorize individuals based on protected characteristics.

High-risk AI systems include those used in critical infrastructure; employment and management of workers; educational and vocational training; and law enforcement and the administration of justice. All remote biometric identification systems are considered high risk.

The concern with limited-risk AI generally relates to a potential lack of transparency in its use. The Act imposes specific requirements to ensure that necessary information is provided before use, such as informing a person that they are interacting with a chatbot or identifying content as AI-generated. Minimal-risk AI (such as the AI-enabled video games and spam filters commonly used today) may be freely used.

Impact on US Organizations

Just as the GDPR set the bar for data protection and privacy when it took effect in 2018, the EU AI Act may become the global standard for determining how AI can be used in a positive and ethical way. And, like the GDPR, the Act will have a significant impact on organizations in the US and around the world.

The AI Act's regulatory framework will apply to any providers and developers of AI systems (including free-to-use systems) marketed or used within the EU. Whether those providers or developers are based or operating in the EU or in another country does not matter; they must comply with strict requirements to mitigate risks and to provide information and monitoring. Penalties for noncompliance range from €7.5 million or 1.5 percent of global annual revenue up to €35 million or 7 percent of global annual revenue.

Take Initiative, Prepare Now

The best approach for US employers to take regarding AI regulation is to be proactive and to embrace a responsible AI mindset. Policies and practices should be considered through an ethical lens; instead of focusing on what your organization's AI can do, consider how it should be used.

The first assessment an organization needs to make is whether it is subject to the EU regulations. The Act applies to providers, deployers, distributors, importers, product manufacturers and authorized representatives. The distinction between these roles is important because each is subject to different requirements. The most stringent requirements apply to providers, but many employers could also be considered "deployers" because they use an AI-enabled HR management system, and would therefore be subject to regulation.

Adopting and communicating a formal AI policy is an important step for organizations to take. The policy should implement robust safeguards that address the key risks set forth in the Act and be regularly reviewed and updated as necessary. The policy should clearly explain the employer's commitment to ethical and positive AI use; establish boundaries of permissible use; encourage transparency, including prohibiting retaliation for asking questions or reporting violations; and ensure that data is secure and private.

Organizations also need to:

  • Set up guardrails for AI development, such as having a risk mitigation team and conducting robust reviews and use case testing.
  • Provide training for how to use the AI system and its ethical application.
  • Communicate about the employer's use of AI to all stakeholders, including employees, customers and the public. Commit to transparency and accountability from the beginning.

What the Future Holds for AI Regulations

As the use of AI technology continues to grow rapidly, legislators will keep addressing it in response to concerns about potential harm and to calls from the AI ethics movement for greater oversight of AI systems.

As new laws and regulations are introduced and adopted, organizations that stay informed about them and adopt best practices for ethical AI use will maintain a competitive advantage in their market.