Mitigating AI discrimination in hiring
Many businesses have welcomed artificial intelligence into their hiring and employee management processes. AI tools, like résumé screeners and recruiting crawlers, are now standard features in platforms like Workday, LinkedIn, and Indeed (you know, the ones randomly sponsoring PGA players during the Masters).
These platforms have streamlined recruiting and become indispensable for most businesses. But behind the convenience may lurk a platform prone to discrimination.
The wake-up call: Workday in court
On May 16, a federal court in California granted preliminary certification to a class-action lawsuit against Workday, accusing it of age-based algorithmic discrimination. The lawsuit claims Workday's AI screening tools systematically disadvantaged older applicants. For instance, plaintiffs (the people suing) say Workday rejected hundreds of applications within hours of submission, often at odd times when no human was likely reviewing them. They also argue that, for many of these applications, no legitimate reason existed for rejection.
If Workday is found to have caused this kind of widespread harm, millions of Americans could be entitled to compensation under a disparate impact liability theory.
Wait, didn't the president do away with this type of discrimination claim?
Hardly. Disparate impact occurs when an employer (or, in Workday's case, an employer's agent) applies a supposedly neutral standard that nonetheless disproportionately harms a protected group. On April 23, the president issued an executive order purporting to eliminate disparate impact liability across the United States.
The executive order declared that disparate impact, a theory originally designed to prevent discrimination, was itself discriminatory. Left unsaid, however, was that the Supreme Court recognized disparate impact in 1971 in Griggs v. Duke Power Co., and Congress later embedded it in Title VII through the Civil Rights Act of 1991. Executive orders cannot reverse Supreme Court precedent or unmake enacted law.
So, while the Equal Employment Opportunity Commission will no longer recognize disparate impact, it remains a form of unlawful discrimination that companies like Workday must address.
Is this just a tech glitch, or can AI really discriminate?
It absolutely can (and sometimes does). AI’s tendency to discriminate isn’t breaking news. Years ago, the EEOC called algorithmic discrimination the “new discrimination frontier.” The American Civil Liberties Union also warned that many AI tools, despite being marketed as objective, “pose an enormous danger of exacerbating existing discrimination in the workplace based on race, sex, disability and other protected characteristics.”
Academic, peer-reviewed studies have found similar problems. One study discovered that large language models often associate successful women with traits like "empathy" and "patience," while linking successful men with "knowledge" and "intelligence" (Zhao et al., Gender Bias in Large Language Models Across Multiple Languages). Another found that sentences with "young-sounding" adjectives were 66% more likely to be scored favorably by sentiment analysis models than identical sentences using "old-sounding" adjectives (Diaz et al., Addressing Age-Related Bias in Sentiment Analysis). Apparently, the word "distinctive" is an "old-sounding" adjective in our lexicon.
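For the technically curious, the basic methodology behind such findings is easy to replicate: score pairs of sentences that are identical except for an age-coded adjective, then compare the results. Here is a minimal sketch in Python. The scoring function is a placeholder for whatever model or API you actually use, and the adjective lists (beyond the examples above) are illustrative.

```python
# Minimal sketch of a paired-sentence bias probe. `score_sentiment` is a
# placeholder, not a real library call: plug in whatever model you use.

TEMPLATE = "Our {adj} colleague presented the quarterly results."
YOUNG_ADJS = ["energetic", "enthusiastic", "dynamic"]
OLD_ADJS = ["distinctive", "seasoned", "mature"]  # illustrative examples

def score_sentiment(text: str) -> float:
    """Placeholder: return a sentiment score in [-1, 1] from your model."""
    raise NotImplementedError("plug in your own model here")

def average_score(adjectives: list[str]) -> float:
    scores = [score_sentiment(TEMPLATE.format(adj=a)) for a in adjectives]
    return sum(scores) / len(scores)

# A large, consistent gap between the two averages suggests the model
# treats age-coded language differently:
# gap = average_score(YOUNG_ADJS) - average_score(OLD_ADJS)
```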
So, what now? Practical steps for businesses
My aim isn't to condemn AI or platforms like Workday. Even if that were my objective, we've crossed the Rubicon on this technology. AI is here, and it will continue to permeate every aspect of the workplace. But as an employment law attorney, I have a few key recommendations:
Inspect what you expect: No business leader expects AI to discriminate. Realizing that expectation, however, requires careful vetting of AI platforms and scrutiny of vendor methodologies. If a vendor cannot produce its latest bias audit, move on. Ask whether the vendor can clearly explain how candidates are assessed and why recommendations are made.
Be ready to listen and act: Every business should regularly conduct bias audits of its AI practices (ideally through legal counsel, so the results stay privileged). It's also smart to set up systems where applicants or employees who feel snake-bitten by AI can appeal to a human for review. Simply trusting AI to get it right isn't enough, and saying "the algorithm did it" will never be a valid defense in a discrimination lawsuit. One common audit check is sketched below.
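As a concrete illustration of what a basic audit can look like, here is a minimal sketch of the EEOC's "four-fifths rule," a longstanding rule of thumb that flags adverse impact when any group's selection rate falls below 80% of the highest group's rate. The numbers are hypothetical, and a real audit should be designed with counsel.

```python
# Four-fifths rule check on hiring outcomes, grouped by a protected
# characteristic. All group labels and counts below are illustrative.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (applicants, hires); returns hire rate per group."""
    return {g: hires / applicants for g, (applicants, hires) in outcomes.items()}

def four_fifths_flags(outcomes: dict[str, tuple[int, int]]) -> dict[str, bool]:
    rates = selection_rates(outcomes)
    best = max(rates.values())
    # True means the group's rate is under 80% of the best rate: a red flag.
    return {g: (r / best) < 0.8 for g, r in rates.items()}

# Hypothetical screening results from an AI résumé screener:
results = {"under_40": (500, 100), "40_and_over": (500, 55)}
print(selection_rates(results))   # {'under_40': 0.2, '40_and_over': 0.11}
print(four_fifths_flags(results)) # {'under_40': False, '40_and_over': True}
```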
Pay attention to indemnification clauses: In a perfect world, your AI vendor would indemnify you for damages caused by algorithmic discrimination. Indemnification is becoming rarer, especially now that vendors like Workday are facing liability as an employment agency or a business agent. At all costs, avoid indemnifying the vendors. Your contracts with vendors should also address bias audits. For example, what happens if the platform fails a bias audit right after you’ve used it for a hiring round? Will indemnification apply then?
Prompt carefully: your words matter: Even the best AI system can discriminate if prompted to, however accidentally. Be careful with the language in your job postings and other prompts. Avoid words like "energetic," "enthusiastic" or "dynamic," because an AI recruiting assistant might interpret these as a nudge to find younger candidates. In other words, avoid those "young-sounding" adjectives. A simple screening sketch follows.
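If you want an automated guardrail, a simple word-list scan of posting text before publication can catch the most obvious age-coded terms. The list below is illustrative only and should be reviewed with counsel before use.

```python
import re

# Flag age-coded terms in a job posting before it goes live.
# The word list is illustrative, drawn partly from the examples above.
AGE_CODED = {"energetic", "enthusiastic", "dynamic", "young",
             "digital native", "recent graduate"}

def flag_age_coded(posting: str) -> list[str]:
    """Return the age-coded terms found in the posting text."""
    text = posting.lower()
    return sorted(t for t in AGE_CODED
                  if re.search(r"\b" + re.escape(t) + r"\b", text))

posting = "We want an energetic, dynamic recent graduate to join our team."
print(flag_age_coded(posting))  # ['dynamic', 'energetic', 'recent graduate']
```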
Check your insurance policies: Does your current insurance cover algorithmic discrimination? Whether it's a general employment policy or a specific employment practices liability insurance (EPLI) policy, every business using AI for recruiting and employee management should review its coverage.
While AI offers transformative potential, the risk of algorithmic discrimination is a critical challenge that employers cannot afford to ignore. Embracing proactive strategies is essential for businesses aiming to leverage AI ethically and shield themselves from costly discrimination claims.
Brian Bouchard, a shareholder at Sheehan Phinney based in Portsmouth, is a seasoned litigator and counselor whose practice areas include labor and employment, real estate, and construction law.