Advice for Employers and Recruiters
Garbage in, garbage out: The truth about AI-powered hiring
Earlier today, on Recruiting Brainfood, Hung Lee interviewed Ritu Mohanka and Bill Fischer, the CEO and CTO of Vonq, respectively. They discussed how AI-powered agents will change recruitment advertising, including whether and how employers advertise their openings on job boards such as College Recruiter.
I always, always, always learn new things when I watch Recruiting Brainfood. Sometimes, the subject matter is simply new to me, or I know very little about it. Sometimes I know a lot about it, but what I thought I knew wasn’t accurate. And sometimes what I knew was accurate, but no longer is. Today’s episode was no different.
I won’t get into a lengthy discussion of what was covered, as Hung, Ritu, and Bill did a good job of that. But the conversation got me thinking, again, about an underlying problem that AI-powered hiring systems of all types face: inaccurate data. Any tech system that relies on data relies on that data being accurate. When you put garbage into the system, you get garbage out. And, realistically, the world of recruitment advertising is stuffed full of garbage, from both employers and job seekers.
Every employer I talk to is getting pitched the same dream right now. Turn on an AI-powered agent, feed it your open roles, and it will do the grunt work. It will find great people fast. It will screen fairly. It will cut costs. It will even keep candidates warm with friendly chatbot updates while your recruiters sleep. That dream is tempting, especially if your team is small and your hiring volume is huge.
But here is the problem. AI-powered agents only work as well as the data they use. And in hiring, the data is often terrible.
That is not a philosophical complaint. It is a practical one. An agent cannot guess its way to a great match. It does not walk your halls, sit in on your standups, or feel the vibe of your team. It reads what you give it. If what you give it is generic, thin, or wrong, the agent will be generic, thin, or wrong right back.
We have been here before with older tech. Applicant tracking systems promised clean automation but ended up amplifying messy inputs. Programmatic job advertising promised perfect targeting but still needed a good job ad to work. AI takes those same old limitations and multiplies them. The better the engine, the bigger the penalty for low-quality fuel.
Let me start with job postings, because that is where most hiring data begins.
Job postings are usually written for compliance, not clarity
Most job ads read like they were assembled by committee, which makes sense because they usually are. Someone in HR pulls a template. A hiring manager skims it between meetings. Legal wants to avoid risk, so more caveats get added. The post goes live, and nobody looks at it again unless a candidate asks a question that exposes a gap.
Worse, many job postings are barely edited copies of postings from other employers. You find a role that sounds close enough, swap out the company name, and call it a day. That shortcut saves time in the moment. Then it costs you weeks on the back end, because candidates do not actually know what they are applying for, and your screening tech does not know what to screen for.
A lot of job advertising language has even become recycled at scale. Researchers and career advisers have noted growing use of euphemisms and buzzwords that are easy for AI to auto-generate but that often hide the real nature of the job. Think of lines like “wear many hats,” “no two days are the same,” or “self-starter.” Those phrases are not neutral. They often signal chaos, understaffing, or unclear priorities. If the ad is vague, candidates infer risk. And if the ad is vague because it was recycled, AI will treat it as truth anyway.
Candidates care about the day-to-day reality. What does a normal week look like? Who do I report to? How big is the team? What tools do we use? How much autonomy do I get? What happens when priorities change? Is this place more like a workshop, a lab, a call center, or a classroom? Those answers are not “nice to have.” They are the difference between a candidate who is excited and one who is nervous.
Employer branding research keeps saying the same thing in different ways. People want a real glimpse into culture, expectations, and purpose, not a corporate mission statement stapled to a task list.
Yet most job ads do not give that glimpse. They list requirements and duties, but they rarely describe the environment where those duties happen. That is like trying to sell a house by listing the square footage and the number of bedrooms, while refusing to show photos, neighborhood details, or even a price. You might get inquiries, but you will not get confident buyers.
Salary information is still missing in far too many postings
Nothing exposes the data quality problem faster than pay transparency. Even with growing pressure from state laws and candidate expectations, about half of online job ads still do not include pay information.
The New York Fed, using Lightcast posting data, found that pay info appeared in about 53 percent of US online postings since January 2024. That is a big jump from years ago, but it still means almost half of ads are silent on pay. The Indeed Hiring Lab saw similar numbers, with roughly 58 percent of ads listing pay as of September 2024.
From a candidate’s perspective, pay is not a side detail. It is one of the first filters. Candidate experience surveys show large shares of job seekers expect salary before applying and are turned off when it is missing.
From an AI agent’s perspective, missing pay is not just inconvenient. It breaks matching. If your system is trying to rank candidates by likelihood of interest, pay is one of the strongest signals. If your system is trying to suggest roles to candidates, pay is one of the strongest constraints. If half of your data does not include that field, the agent starts guessing. And when it guesses, it will guess wrong for a lot of people.
You might think, “Fine, the agent can ignore pay and focus on skills.” But candidates do not ignore pay. In practice, a pay-blind agent tends to do two bad things at once. It sends you candidates who would never take the job at your budget, and it sends candidates jobs they will not click on because the budget is unclear. That is wasted time for everyone, and it makes the AI look dumb even if the underlying model is strong.
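To make that concrete, here is a minimal sketch in Python of what a pay-aware match check looks like, and what happens when the field is missing. The function and data shapes are hypothetical, not any vendor’s API: with structured pay ranges on both sides, the agent can make a decisive call; without them, it can only guess.

```python
from typing import Optional, Tuple

PayRange = Tuple[int, int]  # (min, max) in dollars per year

def pay_compatible(posting_pay: Optional[PayRange],
                   candidate_pay: Optional[PayRange]) -> Optional[bool]:
    """True/False when both ranges are known, None when the agent must guess."""
    if posting_pay is None or candidate_pay is None:
        return None  # missing field: no signal, no constraint
    post_min, post_max = posting_pay
    cand_min, cand_max = candidate_pay
    return post_max >= cand_min and cand_max >= post_min  # ranges overlap

# With structured pay, the call is decisive either way.
print(pay_compatible((80_000, 95_000), (90_000, 110_000)))  # True: ranges overlap
print(pay_compatible((55_000, 65_000), (90_000, 110_000)))  # False: never a fit
# With pay missing, the agent ranks blind and both sides waste time.
print(pay_compatible(None, (90_000, 110_000)))              # None: pure guesswork
```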
Most resumes are thin, vague, and often wrong
Now flip to the other side of the marketplace. Resumes and CVs are the second main data source for hiring agents. I wish I could say they are clean and structured. They are not.
A resume is usually a one- or two-page attempt to compress a messy real-life career into bullet fragments. It is the kind of document people write under stress, often with very little coaching. Even strong candidates struggle to describe what they actually did in a clear way. Early career candidates struggle even more because they do not yet know what matters to employers.
So what do we get? We get start and end dates. Employer names. Job titles. One or two broad lines about responsibilities. Almost never do we get measurable outcomes. Almost never do we get honest notes about what the candidate liked or did not like. Almost never do we get a clear picture of how a person works day to day.
Sadly, the vast majority of resumes are so poorly written that they suggest candidates do not understand that a resume is not an alibi. These resumes show every place a candidate worked, including start and end dates, but not what they achieved there. What skills did they use? What did they like? What did they struggle with, or even dislike? It is incredibly rare for a resume to include that kind of data, yet it is exactly the data that hiring systems need to properly match a candidate to a job.
Even when a candidate wants to be specific, the format of the resume fights them. Many templates reward generic phrases like “responsible for” or “worked on.” A person might have built a tool that saved their team ten hours per week, or managed a project that hit a tight deadline, and it will still show up as “assisted with project management.” We do not get signal. We get fog.
We are also entering an era where resumes are increasingly AI-generated. Some candidates use those tools to improve clarity. Others use them to inflate. Employers are noticing. Surveys show many hiring managers are worried about authenticity and say they are more likely to reject resumes that look auto-written and vague.
An AI-powered agent cannot tell the difference between honest fog and dishonest fog unless you give it another data source. It might pick the best writer, not the best worker. It might reward keyword stuffing instead of real skill. That is not because the agent is evil. It is because the input is weak.
Both sides spin, so the data is biased before the model ever runs
There is one more layer here that matters a lot. Job postings and resumes are not just incomplete. They are biased by design.
Employers write ads to attract talent. Candidates write resumes to get interviews. Both sides feel pressure to present the best possible light. That is normal human behavior. But it means that the data a hiring agent reads is not a neutral description of reality. It is two marketing documents shouting at each other across a crowded room.
Employers oversell culture and growth. Candidates oversell skills and impact. Sometimes the oversell is small. Sometimes it is not.
The result is a marketplace with a trust gap. Some call it “career catfishing,” where workers feel the job they accepted was not the job they were sold. Monster found that a large majority of workers say they have experienced this kind of mismatch. That is a dramatic figure, and the source is not academic, so I would not hang your entire strategy on one headline. But the pattern is real. Candidates talk about it constantly. Recruiters see the fallout when new hires quit quickly because the role did not match the story.
When you feed spun data into an AI agent, the agent does what any system would do. It learns the spin. It amplifies the spin. Then both sides feel even less trust.
Why this matters more with AI agents than with humans
You might be thinking, “We have always had messy postings and messy resumes, and humans still hire.” True. Humans are good at patching gaps. A recruiter can read between the lines. A hiring manager can sense when a resume is light but the candidate is sharp. A candidate can ask follow-up questions and feel a team out.
AI agents do not patch gaps the same way. They interpolate. They predict based on patterns. They treat what is common as what is correct. With clean data, that is powerful. With dirty data, it is dangerous.
If job ads are generic, the agent will match generically. If pay is missing, the agent will match blindly. If resumes are vague, the agent will match on proxies like school name, employer brand, or keyword density. That increases the odds of systematic errors and unfairness.
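A toy example makes the proxy problem visible. This deliberately naive matcher scores resumes by counting keyword hits against the job ad, a simplification of what real systems do, with made-up resume lines. The keyword-stuffed resume outscores the one that describes a real outcome.

```python
# Deliberately naive: score = count of job-ad keywords appearing in the resume.
# Real matchers are more sophisticated, but the failure mode is the same:
# vague input pushes the system toward word frequency instead of substance.
KEYWORDS = {"python", "sql", "dashboards", "stakeholders", "agile"}

def keyword_score(resume_text: str) -> int:
    words = resume_text.lower().split()
    return sum(words.count(keyword) for keyword in KEYWORDS)

stuffed = ("Python SQL dashboards agile stakeholders Python SQL "
           "dashboards agile stakeholders results-driven self-starter")
substantive = ("Built a SQL reporting pipeline that cut weekly close "
               "from 10 hours to 2 and retired three legacy dashboards")

print(keyword_score(stuffed))      # 10 -- keyword stuffing wins
print(keyword_score(substantive))  # 2  -- real impact, lower score
```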
This is not theoretical. Candidates already say they are uneasy when hiring feels overly automated or impersonal, especially when the process is not transparent. If your agent keeps sending mismatched roles or rejecting good people because their resume does not fit the template, you do not just lose a hire. You lose reputation.
And once reputation goes, an agent cannot fix that either.
The hidden cost is not just bad hires, it is missed hires
When people talk about AI risk in hiring, they usually focus on false positives. The unqualified candidate who slips through. That is obvious and painful.
The bigger cost is false negatives. The great candidate who never shows up on your shortlist because their resume did not describe their work the way your model expects, or because your job ad did not describe the job the way they would have recognized.
Every bad data point is a missed connection. It is a qualified person who never applied because the ad did not tell them what they needed to know. It is a qualified person who applied but was ranked low because they did not use the right verbs. It is a qualified person who got an interview and walked away because pay or expectations were different than implied.
Multiply that across fifty roles, and you are not saving time. You are bleeding talent.
So what should employers do about it?
The answer is not to ditch AI. The answer is to stop feeding AI garbage and expecting gourmet outcomes.
If you want AI powered agents to help you hire, you need to invest in the data layer first. Think of it the same way you think about CRM. A sales org cannot buy the best forecasting tool and then keep sloppy account data. The tool will not rescue them. Hiring is no different.
Here are the moves that matter most.
Start by treating every job posting as a product page, not a compliance form. A good product page tells you what it is, who it is for, what problem it solves, what it costs, and what it feels like to use. Your job ad should do the same.
That means writing job-specific content that comes from the team doing the work. Not from a template. You can still use a template for structure, but the substance needs to be yours. Describe the actual projects this person will touch in their first three to six months. Describe how success will be measured. Describe the tools they will use. Describe the rhythm of the team. Describe what kind of person thrives there. If there are trade-offs, say so. Candid ads attract serious applicants and reduce early turnover.
Include pay ranges in your job postings, every time, unless you are in the tiny minority of roles where pay truly depends on a later assessment. We are moving toward a legal world where this is required anyway, and the data already shows it is a recruiting advantage. Even if you are uncomfortable with internal equity questions, silence is not solving that. It is just pushing the cost onto recruiting and retention.
Make your pay data structured. Do not hide it in a paragraph. Put ranges in the fields your ATS and job boards use. AI agents can read text, but they perform better with clean fields.
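As an illustration, here is what structured pay looks like using schema.org’s JobPosting vocabulary, which major job search engines already index. The numbers are placeholders and your ATS will expose its own field names, but the principle holds: pay lives in named fields an agent can parse, not in a paragraph it has to interpret.

```python
import json

# Pay expressed as structured fields using schema.org's JobPosting vocabulary.
# The figures are placeholders; the point is that nothing here needs guessing.
posting = {
    "@context": "https://schema.org/",
    "@type": "JobPosting",
    "title": "Data Analyst",
    "baseSalary": {
        "@type": "MonetaryAmount",
        "currency": "USD",
        "value": {
            "@type": "QuantitativeValue",
            "minValue": 65000,
            "maxValue": 80000,
            "unitText": "YEAR",
        },
    },
}

print(json.dumps(posting, indent=2))
```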
Now look at resumes. You cannot control what every candidate writes, but you can control what you ask for and what you measure.
If you rely on resumes alone, you are choosing the noisiest data source in the stack. Add other signals that are harder to fake and easier to compare. Skills based assessments, work samples, short task simulations, and structured application questions all help. Many employers are already shifting this way because they know resumes are inconsistent.
You can also coach candidates through your process. If your application asks for a short example of recent work, not just a resume upload, you get richer data. If your recruiter screens include a consistent set of questions about outcomes and preferences, you get richer data. Your AI agent can then learn from those richer signals instead of guessing from thin resume lines.
Finally, build feedback loops. When hires succeed, capture why. When hires fail, capture why. Feed that back into your job ad language and your screening criteria. AI agents are good at pattern recognition, but only if you label the patterns with real outcomes.
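As a sketch of what that loop might look like in data terms, here is a minimal example with hypothetical field names. Each hire carries an outcome label, and the labels are rolled up by the screening signal that produced the hire, so the patterns your agent learns are grounded in what actually happened rather than in what the paperwork claimed.

```python
from collections import Counter

# Hypothetical outcome records; "source" and "quit_within_90_days" are
# illustrative field names, not a standard schema.
hires = [
    {"role": "Data Analyst", "source": "work_sample",     "quit_within_90_days": False},
    {"role": "Data Analyst", "source": "resume_keywords", "quit_within_90_days": True},
    {"role": "Data Analyst", "source": "work_sample",     "quit_within_90_days": False},
    {"role": "Data Analyst", "source": "resume_keywords", "quit_within_90_days": True},
]

# Label the pattern: early-quit rate per screening signal.
totals, quits = Counter(), Counter()
for hire in hires:
    totals[hire["source"]] += 1
    quits[hire["source"]] += hire["quit_within_90_days"]

for source, total in totals.items():
    print(f"{source}: {quits[source]}/{total} quit within 90 days")
# Feed these labeled outcomes back into ad language and screening criteria.
```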
The real opportunity here is trust
Employers who fix their data do not just make AI work better. They make hiring work better, period.
Clear job ads reduce drop off. Transparent pay reduces wasted interviews. Honest culture signals reduce quick quits. Richer candidate data reduces bias toward fancy resumes. All of that is true even if you never use an AI agent.
But if you do use one, the upside is huge. A well-tuned agent on top of clean and candid data can help you move faster without moving blindly. It can surface candidates who are both qualified and genuinely interested. It can show candidates roles that match their skills and their lives. It can take busywork off your team while still respecting the human stakes.
That is the future worth building. Not the fantasy where a black box fixes a broken process, but the practical path where better inputs lead to better outputs.
AI in hiring is not magic. It is math. And math does not care how excited the vendor demo made you feel. If the data is vague, missing, or spun, the decisions will be vague, missing, or spun.
So before you buy the next shiny agent, take a hard look at what you are feeding it. Your job postings and your candidate data are not paperwork. They are the fuel. Upgrade the fuel, and then the engine can finally do what it was promised to do.