
Automation & Employment Discrimination

Employers are increasingly using some form of Artificial Intelligence (“AI”) in employment processes and decisions. Per the Equal Employment Opportunity Commission (“EEOC”), examples include:

[R]esume scanners that prioritize applications using certain keywords; employee monitoring software that rates employees on the basis of their keystrokes or other factors; ‘virtual assistants’ or ‘chatbots’ that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements; video interviewing software that evaluates candidates based on their facial expressions and speech patterns; and testing software that provides ‘job fit’ scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived ‘cultural fit’ based on their performance on a game or on a more traditional test. 

EEOC-NVTA-2023-2, Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964, Equal Emp’t Opportunity Comm’n (issued May 18, 2023) (last visited November 17, 2023).

As the agency tasked with enforcing, and promulgating regulations under, federal antidiscrimination laws, the EEOC is concerned about how AI could result in discrimination in the workplace. While AI can be present throughout the employment lifecycle, from job posting through termination, the EEOC has devoted particular attention to applicant sorting and recommendation procedures. If not executed and monitored correctly, the use of AI in these processes could result in discriminatory impact (a.k.a. disparate impact) on certain protected classes. These claims arise when an employer uses a facially neutral tool or policy whose application nevertheless produces a different outcome for a particular protected class.

For example, if an application sorting system automatically discards the applications of individuals who have one or more gaps in employment, the result could be that women (due to pregnancy and childbirth-related constraints) and applicants with disabilities are rejected at a higher rate than men and “able-bodied” applicants. In this circumstance, the employer “doesn’t know what it doesn’t know” and would likely be unaware that some women and applicants with disabilities were pre-sorted before review. While the employer may not have intended this outcome, it could nevertheless be found to have violated Title VII of the Civil Rights Act (“Title VII”) and the Americans with Disabilities Act (“ADA”).
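To make the screening example concrete, the sketch below (with hypothetical numbers) shows how an employer might audit an automated screen by comparing selection rates across groups. It uses the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures as a rough benchmark: a group whose selection rate falls below 80% of the highest group’s rate is flagged for closer review. This is an illustrative heuristic, not a legal test, and the function names are our own.

```python
# Illustrative audit sketch: compare selection rates across groups after an
# automated screen. The "four-fifths rule" treats a selection rate below 80%
# of the highest group's rate as a rough flag for possible adverse impact.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who passed the screen."""
    return selected / applicants

def impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Ratio of each group's selection rate to the highest group's rate."""
    rates = {g: selection_rate(s, n) for g, (s, n) in groups.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical counts: (selected, applicants) after a resume-gap filter.
groups = {"men": (48, 100), "women": (30, 100)}
ratios = impact_ratios(groups)
flagged = {g for g, r in ratios.items() if r < 0.8}
print(flagged)  # women's rate (0.30) is 62.5% of men's (0.48) -> flagged
```

An audit like this would not tell the employer *why* the rates diverge, but it would surface the kind of disparity the resume-gap example describes before it becomes a litigation risk.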

Ironically, many employment AI tools are marketed as bias-eliminating because some can operate through data de-identification—a process by which protected-class information is removed from application materials. For example, as a general matter, applicants with “ethnic-sounding” names are less likely to receive callbacks than those with Anglo-sounding names, like the John Smiths of the world. By replacing applicant names with numbers, implicit bias is less likely to creep in.
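In code, the de-identification step described above can be as simple as stripping the name field and keying each application to an opaque ID, with the name-to-ID mapping held back from reviewers. The following is a minimal sketch under our own assumed data layout, not any vendor’s actual implementation:

```python
# Minimal sketch of data de-identification: remove the name (a direct
# identifier) from each application and key records by an opaque ID, so
# reviewers see qualifications without name-based cues. The field names
# here are hypothetical.
import uuid

def deidentify(applications: list[dict]) -> tuple[list[dict], dict[str, dict]]:
    """Return reviewer-facing records plus a key to re-link finalists."""
    reviewable, key = [], {}
    for app in applications:
        app_id = uuid.uuid4().hex
        key[app_id] = {"name": app["name"]}          # withheld from reviewers
        record = {k: v for k, v in app.items() if k != "name"}
        record["id"] = app_id
        reviewable.append(record)
    return reviewable, key

apps = [{"name": "John Smith", "years_experience": 5}]
records, key = deidentify(apps)
print("name" in records[0])  # False -- reviewers never see the name
```

Note what this sketch does *not* remove: education, residence, and work history remain visible, which is exactly why, as discussed next, de-identification is not a cure-all.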

Although data de-identification is one tool for avoiding bias in employment decisions, it is not a cure-all and can sometimes backfire. For example, data de-identification could leave an employer ignorant of the disparate impact caused by its policies. Take the name example. Names are often not the only indicators of race or culture. Suppose the HR professional reviewing applications is not well versed in Historically Black Colleges and Universities (“HBCUs”), and when she does not recognize the name of an HBCU on “John Smith’s” application, she moves it to the bottom of the pile. Of course, there are other, more subtle race/ethnicity data points that could trigger subconscious bias (e.g., residence or prior job experience). While there is no single solution to this complex problem, auditing application systems can at least put the employer on notice that something may need to change.

When developing an AI utilization strategy, employers must be mindful of the complexity of both the systems and the law. 

At Sherman & Howard, our AI Task Force is here to provide guidance to facilitate compliance and mitigate risk by working to stay abreast of these new developments and the implications for our clients. This article is part of a series of client advisories that will discuss the potential benefits and drawbacks of generative AI in the workplace, relevant statutory and regulatory guidance, litigation risks, and other critical issues. If you have questions about AI policies, please contact your Sherman & Howard attorney, and if you would like to subscribe to our AI advisories, click HERE.

Tags

artificial intelligence, labor & employment