The False Choice of AI vs. Humans in Hiring: Invest in Both, Not One or the Other
There’s a new pitch making the rounds in HR tech, and it’s a doozy. Work technology vendors are claiming that job seekers prefer their AI-powered hiring solutions over dealing with “incompetent” human recruiters and hiring managers. In other words: “Hey, candidates would rather be ranked, interviewed, even hired by our AI than by your bumbling staff!” It’s a clever selling point, tapping into frustrations many job seekers share about poor hiring experiences.
But let’s pump the brakes and examine that claim. After three decades in the recruitment industry (yes, I started College Recruiter that long ago), I’ve seen hype cycles come and go. And this notion that we must choose between inept humans or hyper-efficient AI in hiring? It’s a false choice – a classic example of a solution framing the problem to fit its narrative.
Sure, plenty of candidates have horror stories of inattentive interviewers, biased gatekeepers, or clueless hiring managers. Nobody likes being ghosted or misjudged by a human who hasn’t done their homework. That doesn’t mean an algorithm automatically becomes the preferred alternative. The real issue isn’t humans vs. machines at all – it’s competence vs. incompetence. And competence doesn’t depend on whether it’s an AI or a person; it depends on investment, training, and intent.
So before we hand over the hiring keys to the robots, let’s break down why this “AI versus bad humans” framing is not just misleading, but dangerous. If an employer can’t be bothered to invest in making their people better, why on earth would we believe they’ll invest properly in an AI system? And conversely, if they do invest in a quality AI, why wouldn’t they also invest in ensuring their human team is top-notch? It’s time to bust this myth and talk about what job seekers really want – and how smart employers can deliver it by investing in both technology and people.
AI vs. Incompetent Humans: A False Dichotomy
Let’s start with the core of the vendor pitch: “Our AI will treat your candidates more fairly and efficiently than your team does.” The scenario painted is usually one where the human hiring process is slow, biased, and inconsistent – which, to be fair, it often can be. We’ve all heard (or experienced) tales of the harried recruiter who spends 6 seconds scanning each résumé, or the hiring manager who asks bizarre interview questions and makes gut decisions on a whim. Candidates understandably get frustrated with those experiences.
HR tech vendors have seized on that frustration. I’ve sat through demos where a salesperson all but says, “Your recruiters are overwhelmed and your hiring managers are biased; our AI never gets tired or prejudiced. Candidates will prefer dealing with a neutral algorithm over an inconsistent human.” They might even cite surveys or anecdotal evidence: for example, reports of candidates finding an AI-driven interview “fairer and more comfortable” than a rushed phone screen, or that scheduling with a chatbot is easier than chasing a busy recruiter. On the surface, it sounds convincing. After all, who wouldn’t want a fair, fast, and friendly hiring process?
The problem is the false dichotomy. The vendor is presenting two options as if they’re the only choices: Option A: a flawed human experience rife with bias and inefficiency; Option B: a shiny AI that fixes all those issues. What they’re not mentioning is Option C: a well-trained, well-supported human team augmented by technology – in other words, investing in improving your people and using tools wisely.
It’s a bit like a car salesman saying, “Do you want this old clunker that barely runs, or this brand-new self-driving sports car?” If those were truly your only choices, sure, the sports car sounds great. But maybe the real answer is to tune up the engine you have and teach the driver to be better. In hiring terms: train your recruiters and managers, improve your processes, then use technology to assist – not to cover up the problems.
By framing it as AI vs. incompetent humans, vendors conveniently ignore that human incompetence is a solvable problem. People can learn. Processes can improve. Bias can be addressed through training, accountability, and better hiring practices. It’s not an immutable law that recruiting teams must be inept. So the question savvy leaders should ask the vendor is: Why do you assume my team will always be incompetent? If an employer is willing to settle for a chronically inept hiring team, that’s a bigger issue that no AI can fully fix.
In short, the “AI vs. humans” debate is a red herring. The real goal should be good hiring vs. bad hiring. And good hiring comes from competent people using effective tools. Anything less is a false choice.
Bad Employers, Bad AI: If You Won’t Invest in People, You Won’t Invest in Tech (Properly)
Now let’s tackle the first rhetorical question: If an employer is unable or unwilling to invest in its people, what makes us think it would be both able and willing to invest in good AI?
This is huge. Implementing a good AI solution is not like flipping a switch. It’s not, despite what some slick brochures imply, a matter of buying some software and instantly getting better hires. Quality AI in hiring requires investment – not just money, but time, strategy, and continuous effort. You need to train the AI (often on your company’s data), tune it, and monitor it for fairness and accuracy. You need to integrate it with your processes and probably retrain your people on how to work with it. In essence, you need to invest in it almost as much as you would invest in developing an employee.
Now, consider the employer who isn’t willing to invest in their hiring team. Maybe they skimp on recruiter training, or they don’t bother giving hiring managers guidance on effective interviewing. Perhaps they have a “warm body” approach – just fill seats quickly, skills and candidate experience be damned. That organization, historically, hasn’t invested in the human side of hiring.
Why would we believe that the same employer will suddenly do a great job investing in AI? If they’re cheap or short-sighted about people, odds are they’ll be cheap and short-sighted about technology too. They might grab the first AI tool that promises a quick fix, turn it on without proper calibration, and then wonder why their results aren’t magically better (or even why they’re worse).
In fact, we’ve already seen cautionary tales of bad AI implementation. Think of the infamous case where a major tech company tried using an AI resume screener, only to discover it was amplifying bias because it had been trained on biased historical data. That AI ended up favoring male candidates for technical roles simply because the company’s past hiring had skewed male. How did that happen? The tool wasn’t inherently “evil,” but it was implemented without enough oversight or understanding. In other words, the people behind it didn’t invest the effort to spot the bias or fix the data. They wanted a plug-and-play solution to a complex human problem. Sound familiar?
Or consider companies that implement AI video interviews to automate screening. If they don’t invest in candidate experience, they might not warn candidates that an algorithm, not a person, will analyze their video. They might not ensure the AI is tested for bias against different accents or skin tones. So candidates end up talking to a blank screen, later receiving a canned rejection with no human contact, perhaps filtered out by an opaque algorithm. Is that really better than an untrained manager? It’s trading one bad experience for another. The common thread is the lack of investment in doing it right.
The point is, a company that cuts corners with humans will cut corners with AI. Those vendors implying, “Just buy our AI and skip the messy human improvements” are selling snake oil. Because good AI requires good management. If you implement an AI without investing in quality – in training it, maintaining it, evaluating it – you’re essentially unleashing an automated, turbocharged version of your broken process. There’s an old saying in tech: “Garbage in, garbage out.” If your hiring process is garbage, automating it with AI just helps you produce garbage faster and in greater volume.
I often think of the phrase, “A fool with a tool is still a fool.” (Software pioneer Grady Booch gets credit for that one.) If leadership is foolish about hiring, an AI tool won’t miraculously make them wise. It will just allow them to make mistakes at scale. Bad employers + AI = bad AI. So the real fix has to start with changing the employer’s mindset on investing in people and processes. Otherwise, adopting AI is just putting a shiny bandage on a deep wound.
Good AI Needs Good People: Why Investing in One Should Mean Investing in Both
Now for the flip side: If an employer is able and willing to invest in good AI, what makes us think they’d be unwilling to invest in ensuring their people are good? In reality, the best organizations are not choosing one over the other. They know that technology and talent are complementary.
Think about the kind of company that successfully implements an AI hiring tool. For the AI to be “good,” as we noted, they have to put resources into it. That means they care about hiring outcomes and efficiency enough to spend money and time on cutting-edge solutions. Those kinds of companies usually also care about their employer brand, their candidate experience, and the effectiveness of their recruiting team. It would be pretty contradictory for a company to pour money into a sophisticated AI platform, yet leave the people who work alongside it untrained or incompetent.
In practice, companies that are early adopters of AI in recruiting often do a lot of change management and upskilling of their staff. They’ll pilot the AI with trained recruiters overseeing it, gather feedback from candidates, and train hiring managers on interpreting AI-driven assessments properly. Why? Because they want the AI to augment human decision-making, not replace it. They see it as a tool to empower their people, not render them obsolete.
I’ve spoken with talent acquisition leaders at firms using AI for initial interview screening or resume ranking. The smart ones say things like, “It frees up our recruiters to spend more time with the best candidates, instead of slogging through every resume.” Notice: the goal was to give recruiters more time for human connection by automating the grunt work. But that only works because they have skilled recruiters to begin with – recruiters who know how to build relationships and evaluate nuanced qualities that AI might miss. The AI handles the first pass, and the humans still make the final call and provide the personal touch.
In other words, these organizations invest in both the tool and the talent. They don’t assume the tool replaces talent. They also invest in their people to effectively use the tool. This dual investment creates a virtuous cycle: good people make the AI better, and a good AI makes the people even more effective. The end result is a hiring process that is both high-tech and high-touch.
Contrast that with the hypothetical company in the vendor’s story – the one with “incompetent people.” If, by some miracle, that company buys a great AI and actually implements it well, guess what? They are no longer a company with incompetent people! They had to develop some competence to get that far: either they hired better people or trained the ones they had, because otherwise they wouldn’t get value from the AI. Any way you slice it, good AI in the real world tends to go hand-in-hand with a culture that values improvement and learning. And such a culture wouldn’t tolerate incompetent hiring practices from humans either.
So this scenario, where a company invests big in AI but refuses to improve its human side, is pretty far-fetched. It’s theoretically possible, but it wouldn’t stay that way for long – either the AI initiative fails due to lack of human support, or the company’s culture shifts to value talent development as part of making the AI work. Either way, the dichotomy collapses.
The takeaway: Investing in advanced hiring technology should signal a broader investment in excellence. Employers should strive for “Option C” that we discussed: competent humans plus capable AI, working together. That’s how you actually deliver the fairness, speed, and accuracy that vendors promise – not by choosing one over the other, but by elevating both.
What Do Job Seekers Really Want? (Hint: It’s Not a Robot or an Idiot)
We should also address the claim that started this whole debate: “Job seekers would prefer to be ranked, interviewed, and selected by AI rather than by incompetent people.” Do candidates really say that? And if they do, what do they actually mean?
It’s true that candidates are fed up with bad hiring practices. Who wouldn’t be? If the choice is between a disrespectful interviewer who barely hides their bias, or an AI that at least treats everyone consistently, some candidates might indeed choose the AI. There are surveys suggesting younger candidates, in particular, are open to AI-driven hiring if they believe it removes human bias and randomness. One well-publicized study even found that over half of workers (especially Gen Z) said they might trust an AI algorithm over a human manager to make objective decisions, because they’ve seen how subjective and unfair humans can be. This is a serious indictment of how poorly some organizations have managed their hiring and management practices. It’s a wake-up call: people crave fairness and transparency, and they’ll take it wherever they can get it.
However, let’s be very clear: “prefer AI over incompetent humans” is not the same as “prefer AI over any humans.” It’s not an absolute love for AI; it’s a reflection of how low the bar has been set by some human experiences. When a candidate says, “I’d rather deal with a bot than this jerk of a hiring manager,” the real message is: “I want a process that is fair, respectful, and efficient. I don’t care if it’s delivered by a person or a machine, as long as I’m treated right.” They are not expressing some deep-seated desire to eliminate humans from the process; they’re expressing a desire to eliminate the dysfunction from the process.
In fact, other data shows a lot of skepticism from job seekers about AI-only hiring. According to a 2023 Harris Poll for the American Staffing Association, nearly half of job seekers said they believe AI tools in recruiting could be more biased than human recruiters. More biased! Why? Because they worry these tools haven’t been proven fair, or that algorithms might unfairly screen them out with no appeal. They are comfortable using AI for things like writing a résumé or prepping for interviews (things that help them present themselves), but they’re not so comfortable with AI being the ultimate decider of their fate.
Richard Wahlquist, the CEO of ASA, summed it up nicely when he commented on that survey. He said (and I’m paraphrasing): Job seekers might use AI in their job hunt, but that doesn’t mean they trust AI to make fair hiring decisions. Exactly. Candidates still want a human touch at critical moments – someone to actually read their cover letter, someone to have a real conversation with, someone to explain a rejection or offer feedback if they fall short.
The best hiring experiences usually involve technology smoothing out the rough edges (no one is nostalgic for the days of faxing resumes or playing phone tag for interview scheduling), but also a genuine human connection where it counts. A chatbot can send updates and answer FAQs at 2 AM – great. An AI can test some skills impartially – cool. But at the end of the day, many candidates cross their fingers hoping to meet a competent, empathetic human who can recognize their unique value beyond what any algorithm can see.
So let’s not misconstrue candidate preferences. They don’t want incompetent people running the show – that much is clear. But they also aren’t clamoring for an all-powerful AI overlord to judge them. They want competence. They want fairness, speed, and clarity. That can and should come from well-trained people using well-designed AI tools in tandem. When candidates encounter that combination – say, a quick initial AI assessment followed by a thoughtful conversation with a sharp hiring manager – they report some of the highest satisfaction levels. Because they get efficiency and personalization.
In short, job seekers aren’t asking us to choose between humans or AI. They’re asking us to do better with whichever methods we use. The false choice disappears when you focus on what they really value: a hiring process that feels fair, professional, and human-centric (even if tech-enabled).
Stop the Gimmicks – Invest in Your People and Your Tech
The notion that AI hiring is automatically preferable to people hiring is a marketing gimmick, plain and simple. It’s a false choice designed to make a sales pitch easier. But falling for it would be a mistake for employers and job seekers alike. It distracts from the real work that needs doing: improving how we hire, period.
If you’re an employer, the next time a vendor tries the line “our AI is better than your people,” take a step back and consider what that implies. If your people truly are that bad, fix that first. Why have you allowed incompetent hiring practices to persist? Why not train your team, update your processes, or bring in better talent acquisition professionals? Those improvements will pay dividends across all aspects of your business, not just hiring. And guess what – those improvements will also make any technology you adopt far more effective.
Likewise, if a vendor boasts that candidates love their AI system, ask for the context. Do candidates love it only in comparison with a terrible alternative, or do they truly love it as part of a well-rounded experience? Ask how their tool works with human recruiters and managers, not just instead of them. The best solutions will emphasize how AI can take away drudgery and reduce bias while still keeping humans in the loop for judgment, empathy, and relationship-building. Be very wary of any “solution” that suggests you can abdicate all responsibility to an algorithm. That’s not a solution; that’s a liability waiting to happen.
For the job seekers out there (especially those early in their careers, whom I’ve spent my own career advocating for): keep voicing what matters to you. If you’ve been burned by awful hiring experiences, it’s okay to say you’d prefer an impartial AI over a biased interviewer – that sends a message that employers need to hear. But also know that the ideal scenario is not one where you never talk to a human. It’s one where you engage with competent, respectful humans backed by efficient technology. Hold employers to that standard. The more candidates push for fairness and transparency, the more employers will realize that it’s not about people vs. technology, but about making both better.
In the end, the “AI or people” debate in hiring is a false dilemma we must move past. Smart organizations already have. They know it’s AI and people, together, each doing what they do best. They know that investing in employees – training them, empowering them – goes hand in hand with investing in innovation and tools. One without the other just doesn’t work.
No, job seekers shouldn’t have to choose between an algorithm and an idiot. And in a well-run company, they won’t have to. They’ll get skilled recruiters using advanced AI platforms, fair assessments with personal feedback, timely automated updates, plus genuine human conversations. That’s the vision we should be striving for in hiring.
So let’s stop indulging the false choice. Instead of asking “AI or humans?”, ask “How can we ensure our humans are competent and our AI is well-implemented?” Ask “What investments in training or technology will make the biggest impact on fairness and efficiency?” Those are the real questions that lead to better hires and happier candidates.
At the end of the day, technology can augment human potential, not replace it. And human talent can leverage technology, not fear it. When we get both sides of that equation right, everybody wins – employers, employees, and yes, those vendors too (the ones with solutions that genuinely add value rather than peddle false dilemmas).
The future of hiring isn’t about choosing robots over people or vice versa. It’s about choosing to invest in excellence, wherever it comes from. Good people and good AI aren’t rivals; they’re the dream team. And any vendor who tells you otherwise is selling a fantasy.