We have previously written on New York City’s law requiring bias audits for employers and employment agencies (together, “Employers”) that use automated employment decision tools (“AEDTs”) to assist in making employment decisions. A recent ruling by a federal court in California in Derek Mobley v. Workday, Inc. (Case No. 23-cv-00770-RFL) holds that Employers may face liability under federal anti-discrimination law for their use of AEDTs. If other courts follow suit, this development could affect Employers across the country.
Derek Mobley v. Workday, Inc.: A Turning Point?
Mobley alleged that he is African American, over 40 years of age, and disabled, and that he unsuccessfully applied for over 100 positions through Workday’s online platform. Workday (NASDAQ:WDAY) is a provider of cloud-based financial management and human capital management software. Mobley alleged that Workday’s AEDTs, which implement artificial intelligence and machine learning, discriminated against him based on race, age, and disability. For example, Mobley alleged that Workday’s algorithmic decision-making tools “rely on biased training data and information obtained from pymetrics and personality tests, on which applicants with mental health and cognitive disorders perform more poorly.” (Order at 13.)
The court rejected the argument that automated tools should be treated differently from human decision-makers in this context, stating: “Nothing in the language of the federal anti-discrimination statutes or the case law interpreting those statutes distinguishes between delegating functions to an automated agent versus a live human one.” (Order at 10.) Applying this principle to Mobley’s allegations, the court explained that “Workday’s role in the hiring process is no less significant because it allegedly happens through artificial intelligence rather than a live human being.” (Order at 10.)
The court distinguished disparate treatment from disparate impact discrimination for purposes of Mobley’s claims. The court dismissed Mobley’s disparate treatment (intentional discrimination) theory, noting that Mobley’s operative complaint “lacks allegations supporting that Workday intended this outcome” of discrimination. (Order at 18.) Disparate impact discrimination, however, does not require intent. Rather, the court explained, Mobley had to “(1) show a significant disparate impact on a protected class or group; (2) identify the specific employment practice or selection criteria at issue; and (3) show a causal relationship between the challenged practices or criteria and the disparate impact.” (Order at 13.) Treating the AEDTs themselves as the specific employment practice at issue, the court concluded that Mobley had plausibly alleged disparate impact discrimination.
Implications for Employers and Vendors
This ruling has significant implications for both vendors and Employers using AEDTs. The court applied settled principles of disparate impact discrimination to AEDTs without citing any prior decision that had done so in the AEDT context. Mobley may thus signal increased scrutiny of, and potential liability for, Employers and vendors that use AEDTs under traditional disparate impact analysis.
In addition, new theories of liability may emerge. For example, as of 2024, several U.S. states have enacted or proposed legislation regulating various aspects of AI, including its use in employment decisions. California’s proposed Artificial Intelligence Accountability Act (SB 896) would require state agencies to report on the benefits and risks of generative AI. Connecticut’s privacy law gives consumers the right to opt out of profiling related to automated decision-making. And we have already mentioned New York City’s law. These state- and local-level efforts may reflect a growing trend toward AI governance in this context.