
As we previously reported, the Equal Employment Opportunity Commission (“EEOC”) has had on its radar potential harms that may result from the use of artificial intelligence technology (“AI”) in the workplace. While some jurisdictions have already enacted requirements and restrictions on the use of AI decision-making tools in employee selection methods,[1] on May 18, 2023, the EEOC updated its guidance on the use of AI for employment-related decisions, issuing a technical assistance document titled “Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964” (“Updated Guidance”). The Updated Guidance comes almost a year after the EEOC published related guidance explaining how employers’ use of algorithmic decision-making tools may violate the Americans with Disabilities Act (“ADA”). The Updated Guidance instead focuses on how the use of AI may implicate Title VII of the Civil Rights Act of 1964, which prohibits employment discrimination based on race, color, religion, sex, and national origin. In particular, the EEOC focuses on the disparate impact AI may have on “selection procedures” for hiring, firing, and promoting.

A Background of Title VII

As a brief background, Title VII was enacted to protect applicants and employees from discrimination based on race, color, religion, sex, and national origin. Title VII is also the act that created the EEOC. In its almost 60 years of existence, Title VII has been interpreted to include protection against sexual harassment and discrimination based on pregnancy, sexual orientation, and gender identity, while related federal statutes extend similar protections against discrimination based on disability, age, and genetic information. It prohibits discriminatory actions by employers in making employment-related decisions including, for example, with respect to recruiting, hiring, monitoring, promoting, transferring, and terminating employees. There are two main categories of discrimination under Title VII: (1) disparate treatment, which refers to the intentional discriminatory decisions of an employer, and (2) disparate impact, which refers to the unintentional discrimination that occurs as a result of an employer’s patterns and practices. As stated above, the EEOC’s Updated Guidance focuses on the effects AI may have on the latter.

The EEOC’s Updated Guidance on the Use of AI Decision Making Tools

The Updated Guidance provides important information to help employers understand how the use of AI in “selection procedures” may expose them to liability under Title VII, as well as some practical tips for limiting liability. 

First, as an initial matter, it is important for employers to understand whether they are using AI decision making tools in their “selection procedures” as defined under Title VII. The EEOC clarifies that a “selection procedure” is “any ‘measure, combination of measures, or procedure,’ that is used as a basis for an employment decision.” In other words, the EEOC considers a selection procedure to encompass any and all decisions made by employers that affect an employee’s position in the company, from the employee’s application to their separation. 

Examples of AI-based decision making tools that employers may be using in selection procedures include:

  • resume scanners that prioritize applications using certain keywords;
  • monitoring software that rates employees on the basis of their keystrokes or other factors;
  • “virtual assistants” or “chatbots” that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements;
  • video interviewing software that evaluates candidates based on their facial expressions and speech patterns; and
  • testing software that provides “job fit” scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived “cultural fit” based on their performance on a game or on a more traditional test.

Second, the EEOC explains how employers can and should assess their AI-driven selection procedures for adverse impact. If an AI-driven method causes members of a particular group to be selected at a “substantially” lower “selection rate” when compared to individuals of another group, the employer’s use of that tool may expose it to disparate impact liability under Title VII. A “selection rate” is the proportion of applicants or candidates who are actually hired, promoted, terminated, or otherwise selected. It is calculated by taking the total number of applicants or candidates of a particular group who were selected and dividing that number by the total number of applicants or candidates in that group as a whole. As a general rule of thumb, a particular group’s selection rate is “substantially” lower if it is less than 80 percent, or four-fifths, of the most favored group’s selection rate. The EEOC aptly refers to this as the “four-fifths rule.” The EEOC warned, however, that compliance with the “four-fifths rule” does not guarantee a compliant selection method. “Courts have agreed that use of the four-fifths rule is not always appropriate, especially where it is not a reasonable substitute for a test of statistical significance.”[2]
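To make the arithmetic concrete, the selection-rate comparison described above can be sketched in a few lines of code. The applicant and selection counts below are hypothetical illustrations, not figures from the EEOC guidance, and this sketch applies only the four-fifths rule of thumb; as the EEOC cautions, a full analysis may also require a test of statistical significance.

```python
# Hypothetical example of the EEOC's "four-fifths rule" calculation.
# All counts are invented for illustration.
applicants = {
    "group_a": {"applied": 100, "selected": 60},
    "group_b": {"applied": 80, "selected": 24},
}

# Selection rate = number selected / total applicants in that group.
rates = {g: d["selected"] / d["applied"] for g, d in applicants.items()}

# Compare each group's rate against the most favored group's rate.
best_rate = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best_rate
    flagged = ratio < 0.8  # below four-fifths of the top rate
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f}, "
          f"{'potential adverse impact' if flagged else 'within four-fifths'}")
```

Here group_a's rate is 0.60 and group_b's is 0.30, so group_b's ratio is 0.50, well under the 0.80 threshold and therefore flagged for further review. A ratio above 0.80, as the guidance notes, would not by itself guarantee compliance.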

Third, the EEOC confirmed that, just as an employer may be liable for ADA violations for its use of AI decision-making tools that are designed or administered by a third party, the same is true for violations of Title VII. Reliance on a software vendor’s assurances will not excuse an employer from liability if the software results in a “substantially” lower rate of selection for certain groups. 

Finally, the Updated Guidance makes clear that employers should also evaluate their use of AI tools with respect to the other stages of the Title VII disparate impact analysis, including “whether a tool is a valid measure of important job-related traits or characteristics.”

Practical Tips for Employers

  1. require employees to seek approval before using algorithmic decision-making tools so that you can do diligence on the tool. We previously explained why employee policies should be updated to address use of AI tools in this article;
  2. periodically conduct audits to determine whether the tools you are using result in a disparate impact, and, if they do, whether they are tied to relevant job-related skills and consistent with business necessity;
  3. require your software vendors of these tools to disclose what steps they have taken to evaluate whether use of the tool may have a disparate impact and, specifically, whether they relied on the four-fifths rule or on another standard, such as statistical significance, that courts may also use;[3]
  4. ensure your vendor agreements have proper indemnification and cooperation provisions in the event your use of the tool is challenged;
  5. ensure your employees get proper training on how to use these tools; and
  6. if you are outsourcing or relying on a third party to perform selection procedures or act on your behalf to make employment-related decisions, require them to disclose their use of AI decision making tools so that you can properly assess your exposure.

Key Takeaways

As AI continues to evolve at a rapid rate, employers need to adapt in kind to ensure they are using technology in a responsible, compliant, and nondiscriminatory manner. Although AI may speed up the selection process and even reduce costs, reliance on AI without proper diligence can be problematic. Employers, not the software developers and vendors, are ultimately responsible for ensuring a selection rate that is not “substantially” lower for one group of people. Employers need to remain critical of the methods they implement for selection, from the application stage all the way to separation and transfers. Employers should continue to audit their use of these tools and ensure their employee policies and vendor agreements are updated to minimize their exposure to liability under Title VII and other employment laws. If adjustments or changes are needed, employers should adapt and work with their vendors to ensure they are implementing the least discriminatory methods or can justify their decisions as job related and consistent with business necessity.

As always, Sheppard Mullin will continue to provide updates and insights on any developing legal trends related to employment and the use of AI technology.


[1] For a review of the New York City Automated Employment Decision Tools Law, click here.

[2] Citing Isabel v. City of Memphis, 404 F.3d 404, 412 (6th Cir. 2005).

[3] See Jones v. City of Bos., 752 F.3d 38, 50, 52 (1st Cir. 2014) (explaining that the four-fifths rule may be rejected when a test of statistical significance would otherwise indicate adverse impact, such as when there is a small sample size or in cases where the “disparity is so small as to be nearly imperceptible without detailed statistical analysis.”)