A New Standard for Ethical Hiring: Combining Human Judgement with AI Fairness Tools

Published 06/17/2025

The ability of AI-powered systems to make decisions from complicated sets of data has brought speed and efficiency to hiring, loan approvals, fraud detection, and more. But has it enhanced accuracy? Many would argue no, especially those affected by biased hiring algorithms, from job applicants to the employers who rely on them.

One of the most elusive conundrums is how to combat age bias in AI-powered hiring systems. For the 2024 IEEE International Conference on Big Data and Smart Computing (BigComp), Christopher G. Harris of the University of Northern Colorado took on the task of studying two potential solutions: human-in-the-loop (HITL) systems and AI fairness toolkits. By examining how each performs, both in isolation and together, Harris provides recommendations on how organizations can use them to reduce age bias and implement ethical AI hiring practices.

How AI Algorithms Introduce Age Bias in the Hiring Process


AI algorithms introduce age bias in the hiring process because of the data they’re trained on and the keywords they look for.

For example, historical hiring data often reflects decisions made by biased humans. A hiring manager may see that someone graduated from university in 1997 and immediately start looking for age-related weaknesses in their CV. These kinds of biased decisions end up in the training data, producing biased AI algorithms.

There’s also the issue of training data that doesn’t include enough older applicants. If an algorithm is trained mostly on data from younger applicants, it may be more likely to recommend interviewing candidates with similar attributes.

In addition, keywords and skill descriptions are problematic because an algorithm may screen for specific phrases that younger applicants tend to use, especially on CVs for tech roles. The absence of those phrases doesn’t mean a candidate is less qualified, but the algorithm may make that assumption.
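
To make that failure mode concrete, here is a deliberately simplified Python sketch. The keyword list, CV snippets, and scoring rule are invented for illustration and are not from the paper; real screening systems are far more complex, but the pattern is the same: a candidate who describes equivalent experience in older terminology scores lower.

    # Hypothetical, deliberately simplified illustration (not from the paper):
    # a screener that scores CVs by counting "trendy" keywords penalizes an
    # equally qualified candidate who uses older terminology.

    TRENDY_KEYWORDS = {"kubernetes", "react", "devops", "microservices"}  # assumed list

    def keyword_score(cv_text: str) -> int:
        """Count how many of the screener's keywords appear in a CV."""
        words = set(cv_text.lower().split())
        return len(TRENDY_KEYWORDS & words)

    veteran_cv = "Led server administration, build automation, and web interface development"
    recent_grad_cv = "Ran devops pipelines on kubernetes and built react microservices"

    print(keyword_score(veteran_cv))      # 0 -> likely screened out
    print(keyword_score(recent_grad_cv))  # 4 -> advanced to review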

Two Ways to Fight Bias


Human-in-the-loop (HITL) systems and AI fairness tools are two very different methods for reducing the risk of an algorithm making a biased decision.

Human-in-the-Loop (HITL)

HITL systems include human reviewers in the decision-making process. Using their oversight and expertise, these reviewers can recognize biases that AI may fail to identify. HITL is especially useful when hiring for specific industry domains because the humans involved understand nuances that can indicate biased AI-generated decisions.

AI Fairness Tools

AI fairness tools are software used to detect and mitigate bias. They analyze the algorithms themselves, along with their training data and the decisions they produce, to identify bias. An AI fairness toolkit is often a good option because, unlike a human reviewer, it doesn’t need domain-specific knowledge to identify bias in an algorithm.
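
For illustration, the short sketch below uses IBM’s open-source AI Fairness 360 library, one widely used fairness toolkit, to check a toy set of screening decisions for age bias. The column names, the over-40 grouping, and the outcomes are assumptions made for this example, not details from Harris’s study.

    # Illustrative sketch using the open-source AI Fairness 360 toolkit
    # (pip install aif360). All data, column names, and the over-40 grouping
    # are invented for this example.
    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric

    # Toy screening outcomes: hired = 1 means "advanced to interview".
    # over_40 = 1 marks the older (unprivileged) group.
    df = pd.DataFrame({
        "over_40": [1, 1, 1, 1, 0, 0, 0, 0],
        "hired":   [0, 1, 0, 0, 1, 1, 1, 0],
    })

    dataset = BinaryLabelDataset(
        favorable_label=1,
        unfavorable_label=0,
        df=df,
        label_names=["hired"],
        protected_attribute_names=["over_40"],
    )

    metric = BinaryLabelDatasetMetric(
        dataset,
        unprivileged_groups=[{"over_40": 1}],
        privileged_groups=[{"over_40": 0}],
    )

    # A parity difference far from 0, or a disparate impact far from 1,
    # flags a possible bias for a human (or a mitigation step) to review.
    print(metric.statistical_parity_difference())  # 0.25 - 0.75 = -0.5
    print(metric.disparate_impact())               # 0.25 / 0.75 ≈ 0.33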

The use of HITL and AI fairness tools arose in response to AI algorithms demonstrating bias, particularly as they guided hiring decisions. In 2015, for example, Amazon found that its experimental recruiting tool preferred male candidates. As the system analyzed resumes, it gave lower scores to those from all-women’s colleges or containing phrases such as “women’s chess club captain.” This happened because the tool was trained on resumes submitted mostly by men, so it learned to treat male candidates as the better fit. [1]

HITL and AI fairness tools are designed to prevent AI algorithms from introducing gender, age, race, and other kinds of biases into hiring processes.

The Solution: Combining HITL and AI Fairness Tools


By comparing the performance of HITL and AI fairness tools, Harris concluded that the best way to reduce the risk of age bias in AI hiring algorithms is to use them in combination. He determined the effectiveness of each method using bias evaluation metrics, including statistical parity difference and disparate impact.
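
To show what those two metrics capture, here is a minimal hand-rolled version of the same calculations a fairness toolkit reports, applied to invented screening decisions for two age groups; all numbers are illustrative only.

    # Minimal sketch of the two metrics named above, computed by hand for
    # invented screening decisions (1 = advanced to interview, 0 = screened out).
    import numpy as np

    over_40  = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])  # unprivileged group
    under_40 = np.array([1, 1, 0, 1, 1, 0, 1, 1, 0, 1])  # privileged group

    rate_over_40  = over_40.mean()    # 0.3
    rate_under_40 = under_40.mean()   # 0.7

    # Statistical parity difference: gap between selection rates; 0 is ideal.
    spd = rate_over_40 - rate_under_40    # -0.4

    # Disparate impact: ratio of selection rates; 1 is ideal, and values below
    # 0.8 trip the "four-fifths rule" commonly cited in US hiring guidance.
    di = rate_over_40 / rate_under_40     # ~0.43

    print(f"Statistical parity difference: {spd:.2f}")
    print(f"Disparate impact: {di:.2f}")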

Harris also evaluated AI fairness toolkits and HITL systems on their own and analyzed how well each reduced age bias. He then used them together, which produced the most compelling results. By applying the two in conjunction, hiring teams can use AI more ethically, increasing the chances of hiring the best candidate without unfairly eliminating qualified applicants.

For HR teams, this approach offers a practical way to reduce bias while ensuring top candidates aren’t unfairly screened out. To explore how to apply these findings and build a fairer, smarter hiring process, read the full paper now.

References:

[1] J. Dastin, “Insight: Amazon scraps secret AI recruiting tool that showed bias against women,” Reuters, Oct. 10, 2018. Retrieved from https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/

Download “Combining Human-in-the-Loop Systems and AI Fairness Toolkits to Reduce Age Bias in AI Job Hiring Algorithms” Article
