New Jersey Guidance on AI Discrimination and Implications for Employers

On January 9, New Jersey Attorney General Matthew J. Platkin and the Division on Civil Rights issued guidance stating that the New Jersey Law Against Discrimination (LAD) applies to AI-powered decision-making in hiring and beyond. In other words, bias produced by AI tools can constitute unlawful discrimination under the LAD.

New Jersey also launched a Civil Rights Innovation Lab to monitor AI compliance, enforce violations, and educate businesses on AI risks. New Jersey employers using AI-driven tools must now proactively ensure these systems don’t create discriminatory outcomes. 

AI Bias is Illegal under the LAD

The LAD’s broad purpose is to eliminate discrimination, and it does not distinguish between the mechanisms used to discriminate. The guidance defines an “automated decision-making tool” as any technological tool, including, but not limited to, a software tool, system, or process, that is used to automate all or part of the human decision-making process. The guidance makes clear that the LAD prohibits discrimination regardless of whether it is caused by automated decision-making tools or by human actions. If an AI system produces biased outcomes, the employer will be held responsible.

The guidance emphasizes that employers cannot escape liability by outsourcing AI hiring, screening, or evaluation tools. Thus, employers cannot point to third-party vendors if a bad outcome occurs and a lawsuit follows. If an AI tool used by an employer leads to disparate impact or direct discrimination, the guidance says that the employer is still legally responsible.  

Civil Rights Innovation Lab

The guidance announced the creation of New Jersey’s Civil Rights Innovation Lab. This new initiative will:

  • Develop AI tools to detect discrimination in hiring, housing, and credit.
  • Enhance enforcement of AI-related discrimination complaints.
  • Offer compliance training to businesses on AI risk management.

Evolving Regulatory Landscape

New Jersey is one of a growing number of states increasing regulation of AI in the context of employment decisions.

  • New York City’s Local Law 144 was the nation’s first law to impose obligations on employers using AI for employment purposes, including mandatory bias audits.
  • Colorado was the first state to pass a law requiring AI bias prevention measures.
  • Illinois became the second state to pass AI workplace legislation that will require employers to provide notice to applicants and workers if they use AI for hiring, discipline, discharge, or other workplace-related purposes.
  • Several other states – including Texas and Connecticut – have pending AI bias legislation for 2025.

Next Steps for Employers

Employers can take steps to identify and eliminate bias in their automated decision-making tools, such as:

  • implementing quality control measures for any data used in designing, training, and deploying the tool;
  • conducting impact assessments;
  • having pre- and post-deployment bias audits performed by independent parties;
  • providing notice of their use of an automated decision-making tool;
  • involving people affected by a tool in its development;
  • purposely stress-testing the tools to search for flaws;
  • training HR teams on AI compliance; and
  • monitoring enforcement trends in anticipation of regulatory shifts.

If you have any questions regarding automated decision-making tools in the workplace, please feel free to reach out to any member of Forework.