Advice for Employers and Recruiters
10 things early-career talent hates about your AI-powered hiring process
For many HR departments, AI-powered hiring systems are a godsend. They promise to filter through thousands of entry-level applications, identify the “gold” in a mountain of resumes, and save recruiters hundreds of hours. But for the people on the other side of the screen—the students, recent graduates, and early-career professionals—the experience is often far from efficient. It’s alienating.
Gen Z and younger Millennials prioritize transparency, authenticity, and “human-centric” workplaces, so the rapid adoption of AI in recruiting has created a significant friction point. If your AI isn’t calibrated with the candidate experience in mind, you aren’t just filtering resumes; you are actively filtering out top-tier talent who would rather work for a competitor that treats them like a human being.
Here are the 10 things early-career candidates hate most about AI-powered hiring systems, and how you can fix them.
1. The “Black Box” Mystery (Lack of Transparency)
One of the biggest frustrations for early-career job seekers is the total lack of transparency regarding how AI makes decisions. When a human recruiter rejects a candidate, there is at least a reason the candidate could, in principle, ask about. With AI, candidates feel their fate is decided by a “black box” algorithm they don’t understand.
Why it hurts: Early-career candidates are often still learning how to position themselves. When they are rejected instantly by an algorithm without knowing if it was because of their GPA, a missing keyword, or a specific assessment score, they feel helpless. This lack of clarity builds resentment toward your employer brand.
The Fix: Be transparent about the tools you use. Inform candidates at the start of the process that AI assists in screening, and explain which criteria (e.g., particular skills or certifications) the system is looking for.
2. The One-Way Video Interview (The “Uncanny Valley”)
Asynchronous video interviews (AVIs), where a candidate records answers to prompts without a human on the other end, are perhaps the most loathed part of the modern hiring stack. Having to talk to a blank screen while a timer counts down feels unnatural and clinical.
Why it hurts: It’s an “all-risk, no-reward” scenario for the candidate. They are being judged on their facial expressions and tone by an AI, but they receive zero social cues or feedback in return. For a generation that values real connection, this feels like an interrogation by a robot.
The Fix: Limit one-way videos to the very earliest stages, and have a real team member record the question prompts on video to put a human face on the process. Better yet, move to live (but recorded) interviews as soon as possible.
3. Algorithmic Bias and the “Cookie Cutter” Filter
Students and recent grads are acutely aware of the potential for algorithmic bias. They fear that AI systems, trained on “historical data” (which often means a workforce that was less diverse), will naturally favor candidates from certain zip codes, prestigious universities, or specific demographic backgrounds.
Why it hurts: Early-career talent often brings diverse perspectives and non-traditional backgrounds. If an AI is programmed to look for the “ideal” candidate based on who succeeded in your company 10 years ago, it will likely filter out the very innovators and diverse voices you claim to want.
The Fix: Regularly audit your AI tools for bias. Ensure your “ideal candidate” profile isn’t just a mirror of your current leadership, but a reflection of the skills needed for the future.
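One concrete, well-established audit is the “four-fifths rule” from the EEOC’s Uniform Guidelines: if any group’s selection rate falls below 80% of the highest group’s rate, the screening stage may be producing adverse impact. Here is a minimal sketch of that check; the pass-through counts are illustrative placeholders, not real data:

```python
# Minimal four-fifths (80%) adverse-impact check for an AI screening stage.
# Counts below are illustrative placeholders; substitute your own pipeline data.
applicants = {"group_a": 400, "group_b": 250, "group_c": 150}
advanced   = {"group_a": 120, "group_b":  45, "group_c":  40}

# Selection rate = share of each group's applicants the AI advanced.
rates = {g: advanced[g] / applicants[g] for g in applicants}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {impact_ratio:.2f} [{flag}]")
```

Running this kind of check on every stage of the funnel, not just the final offer, tells you where the algorithm (rather than the applicant pool) is narrowing your slate.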
4. Gamified Assessments That Feel Demeaning
Many AI hiring suites replace traditional skills tests with “brain games”—puzzles designed to measure cognitive ability, risk appetite, or memory. While recruiters see data, candidates often see a waste of time that feels disconnected from the actual job.
Why it hurts: A computer science grad who spent four years mastering complex code will find it patronizing to be asked to play a “memory match” game to prove their worth. If the game doesn’t clearly relate to the role, it comes across as “jumping through hoops” for the sake of the algorithm.
The Fix: Only use assessments that are directly relevant to the role. If you use gamified tools, explain why—e.g., “This helps us understand your natural problem-solving style in a low-stress environment.”
5. The “Keyword Optimization” Arms Race
Candidates hate that they have to write for a machine rather than a person. This leads to “keyword stuffing” where candidates try to guess the exact phrases the AI wants to see, often at the expense of telling their actual story.
Why it hurts: It rewards those who are good at “gaming the system” rather than those who are good at the job. Early-career professionals who might have incredible potential but haven’t mastered the art of keyword-optimizing a resume are often discarded by the system before a human ever sees them.
The Fix: Use AI tools that prioritize “skills-based” matching and “intent” rather than strict keyword matching. Encourage candidates to use natural language in their applications.
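To make the difference concrete, here is a toy sketch: a strict keyword filter misses a resume that says “JS” instead of “JavaScript,” while even a simple synonym-normalizing matcher catches it. The tiny synonym table is an illustrative stand-in; real skills-based tools use full taxonomies or semantic embeddings:

```python
# Toy comparison: strict keyword matching vs. synonym-normalized skills matching.
# The SYNONYMS table is a tiny illustrative stand-in for a real skills taxonomy.
SYNONYMS = {
    "js": "javascript",
    "javascript": "javascript",
    "ml": "machine learning",
    "machine learning": "machine learning",
}

def normalize(term: str) -> str:
    """Map a raw term to a canonical skill name (fall back to the term itself)."""
    return SYNONYMS.get(term.lower().strip(), term.lower().strip())

required = ["JavaScript", "Machine Learning"]
resume_terms = ["JS", "ML", "React"]

# Strict matching misses "JS" and "ML" entirely.
resume_lower = {t.lower() for t in resume_terms}
strict_hits = {r for r in required if r.lower() in resume_lower}

# Normalized matching recognizes them as the same skills.
canon_resume = {normalize(t) for t in resume_terms}
skill_hits = {r for r in required if normalize(r) in canon_resume}

print("strict:", strict_hits)  # -> set()
print("skills:", skill_hits)   # -> {'JavaScript', 'Machine Learning'}
```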
6. Technical Glitches and Digital Inequality
AI systems are only as good as the candidate’s internet connection and hardware. If a candidate’s Wi-Fi drops during an AI-monitored assessment, or if the AI’s facial recognition struggles with certain lighting or skin tones, the candidate is often the one penalized.
Why it hurts: This creates a barrier for candidates from lower socioeconomic backgrounds who may not have the latest MacBook or high-speed fiber internet. It turns a supposedly meritocratic process into a test of hardware and bandwidth.
The Fix: Always provide a “Request Technical Assistance” or “Retry” option that is managed by a human. Ensure your AI tools are optimized for mobile devices and low-bandwidth situations.
7. Ghosting by Algorithm (The Automated Rejection)
There is a special kind of sting associated with receiving a rejection email 0.4 seconds after hitting “Submit.” It tells the candidate that no human ever looked at their effort, and it leaves them with no path for feedback.
Why it hurts: Early-career candidates put hours into their applications. When a bot rejects them instantly, it feels like their hard-earned degree and internships were dismissed by a line of code. Even worse is “AI Ghosting,” where the system simply stops communicating if you don’t hit a certain score.
The Fix: Delay rejection emails by 24 hours to give the appearance of human review, and provide at least one or two points of automated feedback (e.g., “We are looking for more experience with Python than your profile currently shows”).
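A minimal sketch of what that could look like in practice, assuming a hypothetical ATS (the feedback templates and the scheduling call are placeholders for whatever your system actually provides):

```python
# Sketch: queue a rejection with a 24-hour delay and one piece of concrete,
# criteria-based feedback, instead of firing it instantly on submit.
# The feedback text and scheduling hook are hypothetical placeholders.
from datetime import datetime, timedelta

FEEDBACK = {
    "missing_skill": "We are looking for more experience with {skill} than your profile currently shows.",
    "assessment": "Your {assessment} score was below the threshold we set for this role.",
}

def queue_rejection(candidate_email: str, reason: str, delay_hours: int = 24, **details):
    send_at = datetime.now() + timedelta(hours=delay_hours)
    body = FEEDBACK[reason].format(**details)
    # A real system would hand (send_at, candidate_email, body) to the ATS or
    # email provider's scheduler here; printing stands in for that call.
    print(f"[queued for {send_at:%Y-%m-%d %H:%M}] to {candidate_email}: {body}")

queue_rejection("candidate@example.com", "missing_skill", skill="Python")
```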
8. The Death of “Soft Skills” and Personality
AI is great at measuring “hard” data points—years of experience, software proficiency, GPA. It is notoriously bad at measuring empathy, resilience, curiosity, and “culture add.”
Why it hurts: For early-career talent, “potential” is their greatest asset. They don’t have 10 years of experience to point to, so they rely on their personality and soft skills to win the job. When AI acts as the primary gatekeeper, those intangible qualities are often ignored.
The Fix: Use AI for the initial “minimum requirements” sweep, but move to human-led interactions as quickly as possible to evaluate the “human” side of the candidate.
9. Privacy and Data Surveillance Concerns
Gen Z is the most “online” generation, but they are also the most concerned about data privacy. They hate not knowing where their video recordings, assessment data, and personal information are being stored or if they are being used to train other models without their consent.
Why it hurts: It creates a lack of trust from day one. If a candidate feels “watched” or “data-mined” during the interview, they will carry that skepticism into the job—or simply opt out of the process entirely.
The Fix: Provide a clear, plain-English data privacy policy. Tell them exactly how long you keep their data and give them the option to have it deleted after the hiring cycle ends.
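Backing that policy with an automated cleanup job keeps it honest. A minimal sketch, assuming candidate records carry a collection timestamp (the record structure and the deletion hook are hypothetical):

```python
# Sketch: purge candidate data (videos, assessments, profiles) once the stated
# retention window has passed. Record fields and the delete step are hypothetical.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=180)  # whatever window your privacy policy states

records = [
    {"id": "cand-001", "collected_at": datetime(2025, 1, 10, tzinfo=timezone.utc)},
    {"id": "cand-002", "collected_at": datetime.now(timezone.utc)},
]

def purge_expired(records):
    cutoff = datetime.now(timezone.utc) - RETENTION
    for rec in records:
        if rec["collected_at"] < cutoff:
            # A real job would delete videos, scores, and PII for rec["id"] here.
            print(f"deleting {rec['id']} (collected {rec['collected_at']:%Y-%m-%d})")

purge_expired(records)
```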
10. Over-reliance on Past Data vs. Future Potential
AI models are inherently backward-looking; they analyze what has worked in the past to predict the future. However, the world of work is changing rapidly. What worked for a marketing manager in 2015 isn’t necessarily what will work in 2026.
Why it hurts: Early-career talent represents the future. By using AI to find “more of the same,” employers are effectively shutting the door on the very people who can help them pivot and grow in a changing economy.
The Fix: Set your AI parameters to be “inclusive” rather than “exclusive.” Instead of telling the AI to “find me another person like [Top Employee],” tell it to “find me someone with these core competencies who shows a high capacity for learning.”
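One way to picture the difference: instead of ranking candidates by similarity to a single past employee, score a short list of core competencies plus explicit learning signals. Everything in this sketch (the weights, fields, and bonus) is illustrative, not a recommendation for a production screen:

```python
# Sketch: "inclusive" scoring over core competencies and learning signals,
# rather than similarity to one past employee. All weights and fields are
# illustrative placeholders.
CORE_COMPETENCIES = {"communication": 0.3, "data analysis": 0.4, "sql": 0.3}
LEARNING_SIGNALS = ["self-taught project", "certification", "career pivot"]

def inclusive_score(candidate: dict) -> float:
    skills = {s.lower() for s in candidate["skills"]}
    base = sum(w for skill, w in CORE_COMPETENCIES.items() if skill in skills)
    # Reward evidence of learning capacity instead of pedigree matching.
    bonus = 0.1 * sum(1 for sig in LEARNING_SIGNALS if sig in candidate.get("evidence", []))
    return round(base + bonus, 2)

candidate = {
    "skills": ["SQL", "Data Analysis"],
    "evidence": ["self-taught project", "career pivot"],
}
print(inclusive_score(candidate))  # -> 0.9
```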
Conclusion: Bringing the Human Back to HR
AI is a tool, not a replacement for judgment. For employers looking to attract the brightest young minds, the goal should be “Augmented Recruiting,” not “Automated Recruiting.”
If you use AI to handle the paperwork so your recruiters have more time to actually talk to candidates, you win. If you use AI to replace the conversation entirely, you lose. Early-career talent isn’t looking for a perfect, frictionless, robotic experience—they are looking for a career, a mentor, and a place where they belong. Make sure your hiring system reflects that.
Next Steps:
Over the coming days, we will dive into each of these 10 points in a dedicated article, with deeper data and actionable strategies for your team.