Does New York City’s LL144 Have A Dirty Little Secret?
Background on New York City’s LL144 Law:
July 3rd, 2023 was D-Day for the controversial New York City law (AKA LL144) that will have an enduring place in the history of hiring regulation.
After a year or more of discussion, debates, edits, reconsiderations, and revisions, LL144 is now our reality. Unfortunately, at present there are more questions than answers regarding compliance with this freshly minted law.
Confusion or not, LL144 and emerging legislation of the same ilk are good things. Or are they?
What is New York City’s LL144 Law?:
LL144 is aimed at making hiring a more fair and equitable process. Who could argue with that? Diversity and inclusion are the right thing to do on many levels. Every individual deserves to have access to opportunity, and it has been shown time and again that more diverse companies are more successful.
Believe it or not, LL144 may not be as functional or pure as one would think when it comes to the moral and ethical purposes behind it. Could it be that LL144 has some dirty little secrets going on behind the scenes? We might just have a conspiracy theory on our hands here.
Key findings in New York City’s LL144 Law:
My first cue that something bigger might be going on behind the scenes came through an aha moment I had while working on a client briefing related to LL144. This epiphany was that LL144 completely ignores validity (i.e., the job-relatedness of a hiring tool). Last time I checked, the purpose of any decision-making tool used for hiring is to predict how well an applicant will do on the job. Nothing more, nothing less. What does LL144 say about validity? Goose egg, nothing, nada, zilch.
To sanity check this revelation, I immediately reached out to some of my trusted colleagues and advisors to ask them if I missed something. Every one of them confirmed my fears.
How did this slip past me? Have we all been so caught up in how to navigate the ambiguity in these early days that we whiffed on this critical point? And critical it is, because the concepts of bias (i.e., adverse impact) and validity are joined at the hip. While it is possible to look at one in the absence of the other, the practical issue of their value as a legitimate way to select employees requires both. The EEOC’s Uniform Guidelines on Employee Selection Procedures (AKA UGESP), which serves as the canon when it comes to legal compliance in hiring, says as much. The UGESP applies equally to any selection measure, be it AI-based or paper-and-pencil.
Yes, the UGESP does state that adverse impact is allowed as long as a selection tool can be shown to be valid. But it also mandates that, in the presence of adverse impact, other alternatives must be evaluated and used whenever possible.
By not requiring validation as part of the mandated audit process, LL144 has missed a key point. Any best practices-based guideline related to hiring tools should require both validation and adverse impact studies. So sayeth the federal government, and pretty much 100% of the IO psychologists who specialize in developing and validating employee selection tools.
When taking this reality in, I found myself wondering whether the lack of a validation requirement might be a symptom of a deeper, darker secret that LL144 is hiding.
I quickly began looking for more information and stumbled upon an eye-opening article written by Matthew Scherer, who serves as Senior Policy Counsel for Workers' Rights and Technology at the Center for Democracy and Technology. The CDT is “the leading nonpartisan, nonprofit organization fighting to advance civil rights and civil liberties in the digital age.”
According to Mr. Scherer (and many others), LL144 represents a tug of war between public interest groups who feel it has no teeth and business groups who claim it is obtuse and impossible to enforce. The CDT intimates that business interests have prevailed in this tug of war, wielding influence that has led to the law being significantly watered down, making it easier for corporations to comply with much less oversight.
What does New York City’s LL144 Law mean for organizations using artificial intelligence in their hiring processes?:
In all the material I have read and studied over the past year, I never once thought that there might be an alternate agenda at play when it comes to LL144. Before I put on a tinfoil hat, I took a look at the facts as they relate to the law as it is presently enforced.
Bias audits: LL144 requires audits to include only one statistical test for bias, the 4/5ths rule. While this rule is the poster child for bias audits, it is far from the only statistical test for bias. In fact, as any labor attorney will tell you, best practices in compliance dictate the use of several additional analytical procedures.
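To make the 4/5ths rule concrete: it compares each group's selection rate to the highest group's rate, and an impact ratio below 0.8 flags potential adverse impact. Below is a minimal sketch in Python, with a two-proportion z-test included as one example of the kind of additional analytical procedure best practices call for. The group names and counts are hypothetical; actual LL144 audits must use the demographic categories and impact-ratio calculations defined in the law's final rules.

```python
import math

def four_fifths_check(selected, applicants):
    """Apply the 4/5ths (80%) rule to per-group hiring counts."""
    # Selection rate per group: number selected / number who applied
    rates = {g: selected[g] / applicants[g] for g in applicants}
    highest = max(rates.values())
    # Impact ratio: each group's rate relative to the highest-rate group.
    # A ratio under 0.8 flags potential adverse impact.
    return {g: {"rate": r,
                "impact_ratio": r / highest,
                "flagged": r / highest < 0.8}
            for g, r in rates.items()}

def two_proportion_z(selected, applicants, g1, g2):
    """Two-proportion z-test: is the selection-rate gap between g1 and g2
    statistically significant? (|z| > 1.96 roughly corresponds to p < .05,
    two-tailed.)"""
    n1, n2 = applicants[g1], applicants[g2]
    p1, p2 = selected[g1] / n1, selected[g2] / n2
    pooled = (selected[g1] + selected[g2]) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts, for illustration only
selected = {"group_a": 50, "group_b": 30}
applicants = {"group_a": 100, "group_b": 100}

results = four_fifths_check(selected, applicants)
z = two_proportion_z(selected, applicants, "group_a", "group_b")
```

In this example group_b's impact ratio is 0.3 / 0.5 = 0.6, failing the 4/5ths threshold, and the z-test agrees the gap is significant; with smaller samples the two tests can disagree, which is one reason relying on the 4/5ths rule alone is thin.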
Definition of what constitutes an Automated Employment Decision Tool (AKA AEDT): This is perhaps the biggest area of change and dilution. In its present, revised state, LL144 has significantly narrowed the definition of what constitutes an AEDT, to the exclusion of a number of tools, including those offered by some of the largest providers of AI-driven recruiting software.
According to Scherer,
“The department ignored the civil society groups’ objections and made the change requested by the corporate lobbying groups in its revised rules. This change could effectively neuter LL 144, making it difficult for workers to prove that a tool qualifies as an AEDT under the law unless the tool completely replaced human decision-making in the hiring process. Under the department’s revised proposal, LL 144 may not cover A.I.-powered hiring tools that make recommendations that human decision-makers usually (but don’t always) accept, despite the fact that such tools would affect many workers’ careers and livelihoods.”
According to Alexandra Givens, president of the CDT,
“That leaves out the main way the automated software is used, with a hiring manager invariably making the final choice. The potential for A.I.-driven discrimination, she said, typically comes in screening hundreds or thousands of candidates down to a handful or in targeted online recruiting to generate a pool of candidates.”
Protected classes covered under the law: LL144 does not apply to discrimination based on age or disability.
Explainability: LL144 is agnostic as to how an algorithm makes decisions. This concept is known as “explainability,” and it is crucial to understanding bias.
“The focus becomes the output of the algorithm, not the working of the algorithm,” said Ashley Casovan, executive director of the Responsible AI Institute.
According to the business interests, these changes were made to make LL144 more enforceable. This is a legit point given that the original version was definitely more complex and confusing.
But at what cost?
If you believe the conspiracy, policy makers are using LL144 to take direct aim at the EEOC’s UGESP, seeking to replace it with less stringent guidelines. Their rationale is that the UGESP is out of date and not equipped to handle AI based tools.
But the thing is that the UGESP’s foundational principles apply equally to any sort of hiring tool, AI or not. Diluting them would be a step backwards.
It seems that replacing the UGESP with a less restrictive set of regulations is a very long and uncertain game plan given the tidal wave of similar legislation happening in many other US states, including very liberal trend setters such as California. One would like to think that the sum total of all these efforts will be pure, engendering clear, enforceable legislation that helps workers and businesses alike.
So, is there a real conspiracy afoot here? It is hard to know for sure. But, in some sense it should not matter, because the best game plan is for employers to transcend the fear of enforcement and create policy and practice aimed at ensuring fair and equitable hiring and adherence to AI governance frameworks such as the CDT’s Civil Rights Standards for 21st Century Employment Selection Procedures.
The standards’ provisions are aimed at detecting and preventing discrimination by:
- Requiring that all selection tools be tied to essential job functions.
- Mandating regular audits to ensure tools are effective and accurate, both at the start of and throughout the period employers use them.
- Ensuring that companies select the least discriminatory assessment method available.
- Banning certain tools that pose a particularly high risk of discrimination, such as tools that test workers by analyzing their faces or testing their personalities.
By voluntarily subscribing to standards like these and holding themselves accountable for the ethical use of AI in hiring, companies can rise above the politics and police themselves!
It will be fascinating to see how this drama plays out. I’ll be watching from the Grassy Knoll.