Most recruiters would agree that AI can be a powerful ally in the hiring process. It can automate repetitive, time-consuming manual tasks. For example, it can scan and filter hundreds of applications in seconds, communicate with candidates and answer their queries, and even schedule interviews. It can also help optimize job descriptions and score and rank candidates based on predefined criteria.
AI can also ground decision-making in data. For example, video interview platforms use AI to analyze candidates’ responses — not only the content itself, but also the tone of voice, word choice, and microexpressions. These elements are then correlated with the traits of the company’s top-performing employees.
AI can also reduce bias, enable quicker feedback, and facilitate better communication, ultimately benefiting candidates.
All in all, AI can help enhance candidate matching, reduce time-to-hire, and free recruiters to dedicate more time to building relationships with prospects and refining hiring strategies. However, some of the time saved by AI must be spent on overseeing its use in the recruiting process. This will help prevent algorithmic bias, ensure transparency, and safeguard candidate privacy.
Let’s examine the risks associated with using AI in recruitment and how to mitigate them.
Risk: algorithmic bias
In theory, AI should beat humans at impartiality. After all, it can filter applications based solely on skills and experience, rather than on demographic attributes prone to discrimination. The “problem” with AI, however, is that… it learns from humans. So it will be exactly as biased as the data it learns from.
Algorithmic bias is what happens when AI models reinforce biases present in the data they were trained on. A notable case was Amazon’s sexist algorithm: as it had been trained on resumes submitted over a decade, predominantly from men, it learned that men were more suitable for technical jobs, and consequently developed a preference for male candidates.
The fact that AI is subject to bias doesn’t make it useless. As the Harvard Business Review puts it, “If you don’t like what the AI is doing, you definitely won’t like what humans are doing because AI is purely learning from humans.” And as the economist Maude Lavanchy provocatively argued, even Amazon’s sexist hiring algorithm, for all its flaws, could still be better than a human. Researchers have identified a phenomenon they call “algorithm aversion”: people erroneously avoid algorithms after seeing them err. In other words, “people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake,” even when the algorithm performs better overall.
Earlier in the process, when jobs are advertised, another risk arises, one that stems not from the hiring algorithm itself but from how advertising platforms operate. Researchers studied how a Facebook ad-delivery algorithm that simply optimized for cost-effectiveness delivered ads promoting job opportunities in the Science, Technology, Engineering, and Mathematics (STEM) fields: “This ad was explicitly intended to be gender-neutral in its delivery. Empirically, however, fewer women saw the ad than men. This happened because younger women are a prized demographic and are more expensive to show ads to.” Rather than algorithmic bias, the researchers concluded, the phenomenon was a spillover effect of the fierce competition for the coveted young female audience. Nevertheless, it shows that not even algorithmic transparency and gender neutrality are enough to fix unequal gender outcomes.
How to mitigate the risk
That said, recruiters should by no means accept such distortions as inevitable. Bias obviously harms candidates, but it is also a liability for employers: an algorithm that filters out candidates from marginalized groups leads to missed talent and can even create legal exposure.
Luckily (or not), “it’s easier to program bias out of a machine than out of a mind.” With a machine, you can remove much of the bias simply by giving the model clear instructions (roughly put, “disregard factors such as gender, ethnicity, political beliefs, and sexual orientation when evaluating candidates”). Being a machine, the model will follow those instructions, something a biased human may not be able to do.
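As a rough illustration, here is a minimal sketch of what such instructions, combined with redacting protected fields, might look like. The prompt wording, field names, and redaction step are illustrative assumptions, not any specific vendor’s implementation.

```python
# A minimal sketch: explicit instructions for the model, plus removing
# fields that could reveal protected characteristics before scoring.
# Prompt wording and field names are illustrative assumptions.

SYSTEM_INSTRUCTIONS = (
    "You are a resume screener. Evaluate candidates strictly on skills, "
    "experience, and education. Disregard gender, ethnicity, age, "
    "political beliefs, sexual orientation, and any other protected "
    "characteristic, and do not infer them from names or hobbies."
)

PROTECTED_FIELDS = {"name", "gender", "age", "nationality", "photo_url"}

def redact(candidate: dict) -> dict:
    """Drop fields that could reveal protected characteristics
    before the profile is passed to the scoring model."""
    return {k: v for k, v in candidate.items() if k not in PROTECTED_FIELDS}

candidate = {
    "name": "Jane Doe",
    "gender": "female",
    "skills": ["Python", "SQL"],
    "years_experience": 6,
}
print(redact(candidate))
# {'skills': ['Python', 'SQL'], 'years_experience': 6}
```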
When choosing AI software, it’s essential to evaluate the vendor’s standards, how and when AI is used, and the quality of the model’s training data. You can also check whether the software has been audited.
Additionally, there are tools to detect bias in recruitment processes. For instance, they might highlight job descriptions with gendered language or identify problems in the scores assigned to applicants.
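As an illustration of how such a detector might flag gendered language, here is a minimal sketch using a deliberately tiny word list; real tools rely on much larger, research-backed lexicons.

```python
import re

# Illustrative word lists only; production tools use research-backed
# lexicons covering hundreds of gender-coded terms.
MASCULINE_CODED = {"aggressive", "dominant", "ninja", "rockstar", "competitive"}
FEMININE_CODED = {"supportive", "nurturing", "collaborative", "empathetic"}

def flag_gendered_language(job_description: str) -> dict:
    """Return the gender-coded words found in a job description."""
    words = set(re.findall(r"[a-z]+", job_description.lower()))
    return {
        "masculine": sorted(words & MASCULINE_CODED),
        "feminine": sorted(words & FEMININE_CODED),
    }

ad = "We need a competitive ninja to dominate the market."
print(flag_gendered_language(ad))
# {'masculine': ['competitive', 'ninja'], 'feminine': []}
```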

Risk: lack of transparency
Another risk introduced by AI in recruitment is a lack of transparency. To be fair, this pitfall predates AI. What is specific to AI, however, is that the decision-making process of a tool can be opaque even to the employers using it.
Closely tied to transparency is the concept of explainability: recruiters need to understand why an algorithm made a certain recommendation or rejection. Without explainability, it becomes difficult to justify hiring decisions — eroding trust and exposing organizations to legal risks.
How to mitigate the risk
When purchasing an AI screening tool, it’s essential to choose one that makes its decision-making processes transparent, presenting those insights in plain language that recruiters and hiring managers can easily understand.
As for candidates, while they obviously don’t need to know every technical detail, they do need enough clarity to feel confident in the process.
Kiran Aulakh, a technical and early careers recruiter and Talent Manager with previous roles at Michael Page, Sparta Global, and Teach First, underscored in an interview with TechTalents Insights how transparency strengthens trust and preserves candidate dignity: “People want to feel seen and understood, and there’s still some skepticism around whether a machine can fairly judge someone’s skills or potential. That’s why I always try to be transparent. If we’re using AI at a particular stage, I let candidates know why, and what happens next.”
Risk: privacy issues
AI tools often process sensitive personal information. Safeguarding this data is crucial to maintaining candidate trust and complying with data protection regulations such as the General Data Protection Regulation (GDPR).
GDPR also places restrictions on transferring personal data outside the EU, which is significant since many AI tools store or process data in the United States.
How to mitigate the risk
Ethical AI tools are already designed with regulations in mind concerning the collection, storage, and use of data. Nevertheless, it is essential to ask the vendor whether candidates’ personal data is used to train the model, and, if so, what measures are in place to safeguard their privacy. It is also important to check whether any personal data is transferred outside the EU and, if so, what safeguards are applied to ensure compliance with the GDPR.
Employers should clearly inform candidates about their practices regarding data collection, usage, and storage. They should also ensure personal data is retained only for as long as necessary.
Risk: over-reliance on AI
The risks listed above were already present in recruiting, albeit in other forms, before the advent of AI. The one genuinely new risk on this list, therefore, is over-reliance on AI.
While AI can support many aspects of the recruiting process, it should not replace humans. The HR team should treat the AI agent as a highly knowledgeable, efficient assistant, not hand over full responsibility to it. Relying too heavily on AI can lead to the issues above, as well as to a dehumanized hiring experience in which candidates feel that crucial decisions about them are being made by machines. That hurts both the candidate experience and the employer brand. By contrast, an HR professional who uses AI effectively can significantly boost their productivity while improving the candidate experience.
Again, AI tools are only as good as the information they are supplied with. Think of an AI screening tool that searches for specific keywords in applications. An applicant who uses unusual synonyms for those keywords may be overlooked by a poorly trained AI, even if they are highly qualified (a well-trained AI, it should be noted, may be better than humans at recognizing synonyms). Conversely, unqualified but AI-savvy candidates may stuff their applications with keywords that do not reflect their actual experience, hoping to game the system and secure an interview. If no recruiter takes the time to review this process, the problem will go unaddressed, and the system will keep discarding qualified candidates while advancing unqualified ones.
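To make the keyword problem concrete, here is a minimal sketch contrasting naive exact matching with a synonym-aware variant; the synonym table is a hypothetical stand-in for the semantic matching a well-trained model performs.

```python
# Naive exact keyword matching vs. a synonym-aware variant.
# The synonym table is a hypothetical stand-in for the semantic
# matching a well-trained screening model would perform.
# (Substring matching is itself fragile, e.g. "ml" also matches "html".)

SYNONYMS = {
    "machine learning": {"ml", "statistical learning"},
    "kubernetes": {"k8s"},
}

def exact_match(resume: str, keywords: list[str]) -> set[str]:
    text = resume.lower()
    return {kw for kw in keywords if kw in text}

def synonym_aware_match(resume: str, keywords: list[str]) -> set[str]:
    text = resume.lower()
    hits = set()
    for kw in keywords:
        variants = {kw} | SYNONYMS.get(kw, set())
        if any(v in text for v in variants):
            hits.add(kw)
    return hits

resume = "Built ML pipelines and deployed them on k8s clusters."
keywords = ["machine learning", "kubernetes"]
print(exact_match(resume, keywords))          # set() -- qualified candidate missed
print(synonym_aware_match(resume, keywords))  # {'machine learning', 'kubernetes'}
```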
How to mitigate the risk
The ethical use of AI in recruitment requires continuous vigilance. AI should be audited periodically to ensure it continues to meet its intended purpose.
It’s important to regularly monitor the percentage of candidate dropouts, the stage of the process where dropouts occur most frequently, the recruitment outcomes and costs, and candidate feedback. This will help spot any problems in the process and provide insights into how to optimize each stage.
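As a simple illustration, here is a minimal sketch of a per-stage drop-off report; the stage names and counts are made up for the example.

```python
# Minimal sketch of a per-stage drop-off report for the hiring funnel.
# Stage names and counts are illustrative.

funnel = {
    "applied": 400,
    "ai_screened": 180,
    "recruiter_review": 90,
    "interview": 30,
    "offer": 6,
}

stages = list(funnel.items())
for (stage, count), (next_stage, next_count) in zip(stages, stages[1:]):
    drop = 1 - next_count / count
    print(f"{stage} -> {next_stage}: {drop:.0%} drop-off")
# applied -> ai_screened: 55% drop-off
# ai_screened -> recruiter_review: 50% drop-off
# ...
```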
It’s also crucial to provide candidates with enough human touchpoints (with recruiters and hiring managers) during the process. No candidate appreciates a robotic experience. AI should complement, not replace, the human element in recruitment, enhancing the candidate experience — and safeguarding your employer brand.
Finally, ensure that recruiters refrain from using AI insights as the only decision-making input.
In an interview with TechTalents Insights, the CHRO of medtech company Kayentis, Fy Ravoajanahary, emphasized that certain dimensions of recruitment remain irreplaceably human: “The intuition, the feeling, the subtle cues that emerge in a conversation — all of these are hard to replicate, and they matter. Recruitment is not just about matching skills; it’s about sensing potential, alignment, and energy.”

Best practices
The best practices for applying AI in recruiting ultimately revolve around enhancing human expertise.

Eli Duane, co-founder of Nova, an AI co-pilot for recruiters that integrates seamlessly with Applicant Tracking Systems (ATS) such as Ashby, Teamtailor, and Workable, reinforces the transformative role of AI in hiring: “Recruitment is cognitively demanding, and repetitive tasks increase the risk of unconscious bias,” he explains. “By letting AI handle the early, repetitive stages, recruiters win back time to focus on what really matters: engaging with candidates, sourcing rare skills, and building strong talent networks.”
He also shares a set of best practices, distilled from Nova’s client feedback: “Start with a fine-tuned hiring profile; AI can update outdated job descriptions and provide benchmarks, sourcing strategies, and interview guides tailored to the role. Use AI scoring consistently; ranking every applicant on the same criteria ensures fairness and avoids random or rushed shortlisting.” Eli also emphasizes the importance of automating administrative tasks: “Tasks like outreach, email copy, and scheduling should be AI-driven, freeing recruiters to focus on higher-value work.”
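To illustrate what “ranking every applicant on the same criteria” might look like in practice, here is a minimal sketch of a fixed weighted rubric; the criteria and weights are assumptions for illustration, not Nova’s actual scoring.

```python
# A fixed, weighted rubric applied identically to every applicant.
# Criteria and weights are illustrative assumptions, not Nova's scoring.

WEIGHTS = {"skills": 0.5, "experience": 0.3, "education": 0.2}

def score(ratings: dict) -> float:
    """Each criterion is pre-rated 0-10; the weighted sum gives a 0-10 score."""
    return sum(ratings[criterion] * w for criterion, w in WEIGHTS.items())

applicants = {
    "A": {"skills": 8, "experience": 6, "education": 7},
    "B": {"skills": 6, "experience": 9, "education": 5},
}
for name, ratings in sorted(applicants.items(), key=lambda kv: -score(kv[1])):
    print(name, round(score(ratings), 1))
# A 7.2
# B 6.7
```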
The most effective teams are using AI to enhance the human aspect of recruitment, rather than diminish it. “With AI taking care of the top of the funnel, recruiters can deepen the human side of hiring,” says Eli. He concludes: “Invest saved time into interviews. Candidates value meaningful conversations, which is where top talent is truly won.”

In the spirit of combining AI and human expertise effectively, the tech recruiting and freelance platform Jobshark has recently rolled out two new AI-powered tools: a job ad creator for employers, and a cover letter assistant for candidates. According to Anders Persson, CEO of Comstream — the company behind Jobshark — the new launches add a powerful complement to the human expertise of the platform’s recruiters: “With the new AI tools, we can help companies recruit even faster and more accurately — while our recruiters personally headhunt candidates and ensure a human touch at every step of the process,” he explains. “In other words, AI takes care of the groundwork, while our people oversee the process, provide judgment, market knowledge, and the ability to actively reach and engage candidates with the right opportunities.”
Practical steps
To combine AI and human expertise effectively, employers can follow a few practical steps. One approach is to establish guidelines for using AI in recruitment, and even set up a committee with members from HR, legal, technology, and other relevant departments. This committee would oversee the use of AI, ensure adherence to the guidelines, and resolve any ethical dilemmas that arise. It’s also essential to train the talent acquisition team so they understand how AI outputs are generated, how to interpret them effectively, and how to prevent and spot bias.
Another recommendation is to start with limited-scale trials when adopting new AI tools, and to gather feedback from both candidates and recruiters. This approach helps employers nip problems in the bud before implementing AI tools on a larger scale.