From facial analysis tools that assess employees’ attentiveness in meetings, to bots that screen job applicants, to employee scheduling technologies, artificial intelligence is quickly infiltrating all aspects of business operations. It is only natural that employers have begun to wonder whether and how AI can maximize their output and improve the bottom line.
In the employment realm, the EEOC recently issued a “technical assistance” guidance document explaining how Title VII of the Civil Rights Act applies to automated systems that incorporate AI into a range of HR functions. In the guidance, the EEOC warns that neutral tests or selection procedures, including algorithmic decision-making tools, that have a disparate impact on the basis of race, color, religion, sex or national origin must be job-related and consistent with business necessity; otherwise they are prohibited. “Disparate impact” is, effectively, unintentional discrimination: a discriminatory result is reached without any intent to discriminate. Moreover, the guidance states, “if an employer administers a selection procedure, it may be responsible under Title VII if the procedure discriminates on a basis prohibited by Title VII, even if the test was developed by an outside vendor.”
In addition to the federal EEOC guidelines, employers with employees in some states should be mindful of their state-specific laws for the use of AI in employment. For example, in 2020, Illinois enacted the Artificial Intelligence Video Interview Act, which requires covered employers to: (1) obtain consent from job applicants before using AI, after explaining how the AI works and its evaluation standards; and (2) ensure proper control of video recordings and deletion upon request.
The same year, Maryland passed its AI-in-employment law, H.B. 1202, which prohibits employers from using facial recognition technology during an employment interview to create a facial template without the applicant’s consent (the law defines what constitutes valid consent).
On July 5, 2023, New York City’s Department of Consumer and Worker Protection will begin enforcing Local Law 144, which regulates the use of AI in “employment decisions.” Before employers or HR departments use automated employment decision tools to assess New York City residents, they must generally: (1) conduct a bias audit; (2) notify candidates or employees residing in the city about the use of such tools; and (3) notify affected persons that they may request an accommodation or alternative process. Violations of the law are subject to civil penalties, which may accrue daily and separately for each violation.
Takeaways: First, given the remote nature of today’s workforce, employers must be mindful that they may be covered by the AI laws of any state where their employees work; even a single remote employee could trigger that state’s AI laws for the employer with respect to that employee. Second, employers should stay on top of these laws and ensure they apply them to whatever AI technology they use. Now that we have entered the AI age, the technology will evolve rapidly over the next 10 years, and the law will lag slightly behind. It will therefore be important to monitor these legal developments when using AI for employment purposes.