

Are algorithms used by LinkedIn, iCIMS, Workday, and other HR tech vendors discriminatory?

October 30, 2025


I’ve spent most of my working life inside the job board and HR tech world. I’ve seen how tools that promise fairness can still tip the scales. Not always on purpose. Not always in the same way. But when technology becomes the gatekeeper, small choices ripple out fast. They decide who gets seen, who gets ignored, and who never even gets a chance.

Lately, three stories have been hard to ignore. People say LinkedIn’s algorithm is holding back posts and comments from women. A long-running lawsuit argues Workday’s screening tools have pushed older workers, people with disabilities, and people of color to the side. And a newer case says an employer’s use of iCIMS and related AI tools blocked Black applicants. These are big claims. They won’t be settled overnight. Still, they matter right now—because they touch the daily systems that thousands of employers and millions of job seekers rely on.

Let me be clear about my role. I’m not a judge. I’m the founder of a job board who believes that every student and recent grad deserves a great career. My lens is practical. If there’s a risk, name it. If there’s a fix, use it. If there’s harm, stop it. That’s the spirit behind this piece.

Before we go deeper, I want to bring in a friend. Martyn Redstone writes the H.A.I.R. e-newsletter and is one of the foremost experts on how AI is being used, and misused, in HR technology. He recently wrote about the LinkedIn allegations. His reporting and commentary have helped a lot of industry folks connect the dots. I’ve heard similar stories for years in side chats at conferences and DMs from community members. Martyn’s work put it on the record in a way that’s hard to wave away.

What people are alleging

Let’s start with LinkedIn. The short version is that many women say their posts and comments don’t travel as far as similar content from men. Some say posts about gender equity or DEI get flagged, throttled, or even removed. Others describe “false positive” takedowns where a post advocating inclusion gets labeled as discriminatory, only to be restored after an appeal. Are these isolated mistakes? Or is there something systematic in how the feed ranks, filters, and moderates? That’s the heart of the concern.

Martyn’s H.A.I.R. piece pulled together user tests, screenshots, and first-hand notes about this pattern. It doesn’t prove that every woman’s content is suppressed all the time. No single article could. But it gives weight to the lived experience many users, especially women in our space, have been reporting. For a platform that shapes professional reach and reputation, even a modest, consistent tilt can change careers.

Now Workday. The Mobley v. Workday case has become the test bed for a big legal question: when a vendor’s AI helps decide who moves forward, can that vendor be held to the same anti-discrimination rules as the employer? The plaintiff says yes, arguing Workday’s tools created disparate outcomes for protected groups. Workday says no, arguing its customers control the settings and make the choices. The court has allowed key parts of the case to continue. That doesn’t decide the facts. It does say the questions are serious enough to be heard.

Then there’s iCIMS. According to FairNow, which was recently acquired by AuditBoard, the complaint targets an employer, SiriusXM, but calls out the alleged use of iCIMS and AI-enabled screening in ways that may have disadvantaged Black applicants. iCIMS is not the defendant in that case, yet it’s central to the story. That’s a warning to every vendor and every employer: even if your logo isn’t on the lawsuit, your product and your process may still end up in the spotlight.

Are these companies illegally discriminating?

I’ll lay out fair arguments on both sides. The point here isn’t to pick a winner. It’s to help leaders see the shape of the debate and the risks that come with it.

The argument that the systems are discriminating

One of the strongest claims is about disparate impact. In U.S. law, a practice can be unlawful even without bad intent if it hits a protected group harder and isn’t truly necessary for the job, or if there’s a less discriminatory alternative. If an AI model screens out women, or older workers, or disabled applicants at higher rates, that can be enough. The Workday case leans on this. So does the SiriusXM complaint.
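To make that concrete, here is a minimal sketch of the arithmetic behind a basic adverse-impact check, using the EEOC’s four-fifths guideline and hypothetical counts. The group labels and numbers are invented for illustration; a real analysis would also involve significance testing and legal counsel.

```python
# Hypothetical counts: a minimal sketch of a "four-fifths rule" check.
# A selection-rate ratio below 0.80 is a common (not conclusive) red flag
# for disparate impact; real analyses add statistical significance tests.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of applicants in a group who advanced past the screen."""
    return selected / applicants

group_a = selection_rate(selected=120, applicants=400)  # e.g., 30%
group_b = selection_rate(selected=60, applicants=300)   # e.g., 20%

impact_ratio = min(group_a, group_b) / max(group_a, group_b)
print(f"Selection rates: {group_a:.0%} vs {group_b:.0%}")
print(f"Impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.80:
    print("Below the four-fifths guideline -- investigate further.")
```

Agencies and courts do not stop at this ratio, but it shows why even modest gaps in selection rates matter once a screen runs at scale.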

Another claim is about proxies. AI loves patterns. It does not understand people. It will grab any signal that helps it predict a “good hire,” even if that signal is really a stand-in for race, sex, disability, or age. ZIP code can map to race. Gaps in employment can map to disability. Certain schools and employers can map to class and ethnicity. If your data bakes in a biased past, your model will serve a biased future unless you work hard to prevent it.

There’s also the agency angle. If a vendor’s tool gates access to interviews and the employer largely relies on that gate, then the vendor acts like an agent. That matters. It means the vendor can be held to the anti-discrimination standards that apply to the hiring process. Courts and regulators have been open to that argument in recent years.

Finally, think about platform duty. In the EU, LinkedIn is designated a very large online platform under the Digital Services Act, which comes with duties to assess and mitigate systemic risks like discrimination. Even outside the EU, the ethical bar is the same: if your algorithm shapes who gets heard at work, you need to make sure it isn’t turning down the volume on protected groups.

The argument that the systems are not discriminating

On the other side, companies say their tools are configurable and advisory. Employers choose the settings. Employers decide who to interview and hire. If that’s true in practice, vendors argue, they shouldn’t be blamed for how customers use the product.

Another important point: correlation is not causation. A post might get less reach because of network effects, timing, or topic fatigue, not because the poster is a woman. A candidate might get rejected because of clear job requirements, not because of their race or age. Sorting this out takes careful analysis. Anecdotes help us see a problem, but they don’t settle it.

And then there are false positives. Automated moderation makes mistakes. When it flags a DEI post as “discriminatory” and later restores it, that’s a broken process. It’s not automatically illegal discrimination. To rise to that level, you’d need to show a pattern, a protected class harm, and a failure to fix it once known.

Finally, the EU rules and state-level U.S. rules are mostly about process and governance. They require risk management, documentation, audits, explainability, and human oversight. A company can have disparities and still comply—if it can justify why a feature is necessary and show it tried to find safer alternatives.

Why this matters to employers who aren’t in the headlines

You may not work at LinkedIn, or use Workday, or run iCIMS. You’re still in the blast radius. Here’s how that plays out in real life.

Legal risk rises for everyone. If a plaintiff’s lawyer wants to test your hiring system, they will ask for logs, audit results, model documentation, and outcomes data. If you can’t produce them, you look careless. If the numbers show unexplained gaps, you look worse. Even if you did nothing intentionally wrong, the costs add up: subpoenas, expert reports, settlement talks, and months of distraction.

Policy risk rises too. New York City’s Local Law 144 requires bias audits and candidate notices for automated employment decision tools. California’s civil rights regulators have adopted rules covering automated-decision systems in employment under the state’s civil rights law. The EU’s AI Act treats AI used in hiring as high-risk, with the compliance obligations that label carries. These aren’t academic. They dictate how you buy software, how you govern it, and how you prove you’re using it responsibly.

Brand risk might be the biggest cost of all. Candidates don’t separate your vendor from your company. If your career site is a black hole, they blame you. If a community believes your process filters out people like them, they warn their friends. That kills referral pipelines. It eats into offer acceptance rates. It hurts retention. And it’s hard to rebuild once trust is gone.

Operational drag is real. Procurements slow down while legal teams negotiate AI clauses. Recruiting teams spend time on audits instead of sourcing. Product teams at vendors reroute roadmaps to fill governance gaps. All of that pushes hiring cycles longer and makes each hire more expensive.

What I think is fair to say today

On LinkedIn, I believe the concerns from women and DEI leaders are credible and deserve full, transparent investigation. I also believe the scale of the platform makes even small biases a big deal. If an algorithm is a microphone for your professional life, it should not turn quiet when women speak. The fix is not a press release. It’s measurement, mitigation, public reporting, and a clear way to appeal mistakes that works the first time—not after a pile-on.

On Workday, the case is still in motion. The fact that it has cleared early hurdles means the questions are serious. Can a vendor be treated like an agent when its product gates access to jobs? The court is saying, “maybe—let’s see the facts.” For the industry, the lesson is already here: if your tool touches selection, build compliance into the product, not the marketing.

On iCIMS, even though the employer is the defendant in the SiriusXM case, the vendor is part of the story. That’s the new normal. Discovery will pull in vendor docs. Buyers will ask for proof of fairness by job family and geography. Sales cycles will lengthen unless those answers are ready.

The legal, brand, and human costs—company by company

LinkedIn

LinkedIn is the professional town square for a lot of us. If women’s voices get less reach, the harm is bigger than one post. It affects who becomes a thought leader, who gets invited to speak, who gets recruited, and whose ideas spread. The legal piece in the EU focuses on platform duties to manage systemic risks like discrimination. The brand piece is about trust. If professional women feel like the system doesn’t hear them unless they shout, they’ll stop shouting. Or they’ll go elsewhere. Neither is good for LinkedIn or the people who rely on it.

Workday

Workday sits at the center of many enterprise hiring stacks. The legal risk is obvious: if a court finds that its tools caused disparate impact and that the company acted like an agent of its customers, liability spreads. The brand risk is about confidence. HR leaders want to believe that adopting an enterprise tool makes them safer, not more exposed. If you sell “trust,” your documentation has to be more than a sales deck. It has to be audit-ready, job-family-specific, and current.

iCIMS

iCIMS is a household name in applicant tracking. When its features show up in a discrimination complaint, even as part of an employer’s process, the story still sticks. The practical response is simple but not easy: make sure customers can pull bias-testing reports by role, location, and date range; make sure the logs exist; and make sure the product gives talent teams safe defaults and clear warnings when risky features are enabled.

What employers should do right now

Take inventory. Write down every point in your funnel where an algorithm makes a call: who sees the ad, who gets matched, who gets scored, who gets scheduled, who gets rejected without a human reading the resume. If you can’t map it, you can’t defend it.

Ask vendors for evidence, not promises. You want training-data notes, feature lists, explainability summaries, adverse-impact test results, and live monitoring plans. You want to know what changes when you tune the model, and what breaks when you turn a feature off. You also want to know what the vendor will give you in discovery if a case hits. If it isn’t in writing, assume you won’t get it.

Test outcomes end-to-end. Don’t just audit a tool; audit your flow. Where the law allows, study outcomes by protected class or by lawful proxies that your counsel approves. Look for gaps by stage. If most of your disparity is from ad delivery or sourcing, don’t waste time tuning late-stage models. Fix the top of the funnel first.
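As one hedged illustration of what auditing the flow can look like, the sketch below compares pass rates between two groups at each stage of a hypothetical funnel so you can see where the gap actually opens. The stage names and counts are made up; your own audit should be scoped with counsel and run on your real data.

```python
# Hypothetical funnel data: applicants from two groups counted at each stage.
# The goal is to see which stage contributes most to the overall disparity,
# so remediation effort goes where the gap actually opens up.

funnel = {
    "applied":       {"group_a": 1000, "group_b": 1000},
    "passed_screen": {"group_a": 400,  "group_b": 280},
    "interviewed":   {"group_a": 200,  "group_b": 140},
    "offered":       {"group_a": 50,   "group_b": 34},
}

stages = list(funnel)
for prev, curr in zip(stages, stages[1:]):
    rate_a = funnel[curr]["group_a"] / funnel[prev]["group_a"]
    rate_b = funnel[curr]["group_b"] / funnel[prev]["group_b"]
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    flag = "  <-- below four-fifths" if ratio < 0.80 else ""
    print(f"{prev} -> {curr}: {rate_a:.0%} vs {rate_b:.0%} (ratio {ratio:.2f}){flag}")
```

In this made-up example, the whole disparity sits at the screening stage; the later stages treat both groups about the same, which is exactly the kind of finding that tells you where to spend your effort.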

Cull risky features. If a signal is a proxy for race, sex, disability, or age, you need a very strong, job-related reason to keep it. Many teams can hit the same quality goals by removing or down-weighting a few bad features. That’s especially true in high-volume roles where simple rules plus human review often beat fancy scores.
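Here is one hedged way to hunt for proxies, assuming you lawfully hold self-reported demographic data for testing purposes only. The feature names, values, and the 0.5 correlation threshold below are all illustrative, not a standard.

```python
# Hypothetical audit data: each record has model features plus a protected
# attribute collected separately for testing (never fed to the model itself).
# A strong correlation suggests the feature may be acting as a proxy.

from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length numeric lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sd_x = sum((x - mx) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# 1 = member of the protected group, 0 = not (hypothetical labels).
protected = [1, 1, 0, 0, 1, 0, 0, 1, 0, 1]
features = {
    "commute_zip_score":     [0.9, 0.8, 0.2, 0.3, 0.7, 0.1, 0.2, 0.9, 0.3, 0.8],
    "employment_gap_months": [0, 2, 1, 0, 14, 1, 0, 9, 2, 12],
    "years_experience":      [3, 6, 4, 5, 2, 5, 3, 6, 4, 5],
}

for name, values in features.items():
    r = pearson(values, protected)
    note = "  <-- possible proxy, review job-relatedness" if abs(r) > 0.5 else ""
    print(f"{name}: r = {r:+.2f}{note}")
```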

Add human judgment where it matters most. Automation should recommend. People should decide. Give recruiters an easy, logged way to override any automated reject. Require a short reason. Those notes become your safety net when you need to show that a person—not a black box—made the call.
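For a picture of what an easy, logged override can look like, here is an illustrative sketch of the kind of record a team might keep. The field names and the override_reject helper are hypothetical, not a reference to any particular ATS.

```python
# Illustrative sketch of an override log: a recruiter reverses an automated
# reject and records who, when, and why. These records are what later show
# that a person, not the model, made the final call.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class OverrideRecord:
    candidate_id: str
    requisition_id: str
    automated_decision: str  # what the tool recommended, e.g. "reject"
    human_decision: str      # what the recruiter decided, e.g. "advance"
    recruiter_id: str
    reason: str              # short, required free-text justification
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

audit_log: list[OverrideRecord] = []

def override_reject(candidate_id: str, requisition_id: str, recruiter_id: str, reason: str) -> OverrideRecord:
    """Record a human override of an automated reject; a reason is mandatory."""
    if not reason.strip():
        raise ValueError("A short reason is required for every override.")
    record = OverrideRecord(candidate_id, requisition_id, "reject", "advance", recruiter_id, reason)
    audit_log.append(record)
    return record

override_reject("cand-123", "req-456", "recruiter-789",
                "Relevant internship experience the screen did not credit.")
```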

Tighten your contracts. Bake in audit rights. Require vendors to keep logs long enough to matter. Spell out incident reporting timelines when bias is found. Put in place indemnities that match the real risk, not a boilerplate guess. If you hire in New York City, California, or the EU, say so. Make the vendor warrant that the product supports those rules.

Plan your “bad day” communications now. If a story breaks or an audit finds a gap, you’ll need clear messages for candidates, employees, and customers. Own the issue, state the fix, and set a timeline. People forgive mistakes. They don’t forgive hedging.

A note on criminal vs. civil risk

Most hiring bias exposure is civil, not criminal. You’ll see lawsuits, agency actions, fines, and consent decrees. That said, lying to regulators, destroying evidence, or ignoring court orders can raise the stakes. In some countries, certain discriminatory acts can also trigger criminal penalties. The safe path is simple: be honest, keep records, and fix what you find.

The bigger picture

The tools we use are getting smarter. They’re also getting closer to the core of our decisions. That’s a good thing when they make work fairer and faster. It’s a bad thing when they turn old biases into new code and scale them to millions.

Here’s my bottom line. If your system acts like a gatekeeper, the law, the market, and your conscience will treat it like a gatekeeper. For a social platform, that means checking whether your feed turns down women’s voices and fixing it if it does. For HR tech vendors, that means building bias checks, documentation, and human oversight into the product by default, not as an add-on. For employers, that means asking harder questions, logging more than you used to, and measuring real outcomes, not just vendor demos.

I’m grateful to friends like Martyn Redstone for shining light on where these systems fall short and for pushing all of us to do better. The way forward isn’t mystery. Measure. Mitigate. Monitor. Tell the truth about what you find. Then do the work.

If we get this right, more people will be heard on platforms like LinkedIn, more qualified candidates will be seen in our hiring funnels, and fewer careers will be decided by silent rules no one voted for. That’s worth the effort. That’s the future I want for the students and recent grads who count on job boards like ours—and for everyone else who just wants a fair shot.
