When new technology comes along, it’s important to think about the ethical implications, especially when it comes to things like fairness, transparency, and bias. Governments and regulators have a big part to play in making sure that AI-powered recruiting tools are designed responsibly and that they prioritize diversity, equity, and inclusion. In other words, we need to make sure that these tools don’t discriminate and give everyone a fair shot at getting a job.
This article is a chapter from my eBook AI in Recruiting: Separating Hype from Reality – A Practical Guide for Smarter Fairer Hiring. Feel free to download the whole eBook here.
AI’s Growing Pains: Reality Kicks In
AI has solid potential, but it's not the one-stop shop many expected. AI can really boost productivity and make recruitment much easier. But the tech we have now isn't optimized for the tough stuff, like assessing soft skills or making sure we are not discriminating.
To help identify key business drivers heading into 2025, Korn Ferry asked over 400 talent professionals around the world to share their insights for its latest Talent Acquisition Trends report. Increased AI usage came out on top: 67% named it the top talent trend for 2025.
However, AI is proving that it's not quite the game changer companies hoped for, and taking humans out of hiring raises concerns. Four in ten respondents worry that too much AI in recruitment could make the process impersonal, causing them to miss out on top candidates, and one in four fear algorithmic bias, where biased training data leads to unfair hiring decisions. Companies that expected AI to be a game changer for TA are now also concerned about its inaccuracies.
If mismanaged, AI can undermine the hiring process, but when used strategically and appropriately, it can add real value to the experience—for candidates, recruiters, and hiring managers.
In 2025, more employers will use AI to improve the candidate journey and make it the centerpiece of their recruiting processes. AI is not just about automating hiring; it's about making the recruiting experience smoother and more equitable for candidates.
We already looked into how candidates feel about AI in recruiting. Now let's take a closer look at government regulation of AI in the hiring process.
LESSONS LEARNED: PUBLIC BACKLASHES AND HOW TO AVOID THEM
Amazon Recruiting Algorithm
Amazon started trying to use AI in recruiting in 2015. The company built a team of 12 engineers to create an algorithm that would work as an engine: give it 100 resumes, and it would spit out the top five candidates to hire.
They quickly noticed that the algorithm had inherited biases and was penalizing applications from women because of the male dominance in the training data. They edited the algorithm to ignore gender, but there was no guarantee that it would not pick up on other variables that still carried the bias.
Eventually, Amazon dismantled the team and kept only a watered-down version of the algorithm that provided recommendations to recruiters rather than automatically ranking candidates.
EXPERT OPINION: WHY THIS HAPPENED
Amazon used existing data about its own employees to train the algorithms. Algorithms created this way learn to mimic the data and assign importance to the specific variables it contains.
An algorithm does not care about gender per se, but given historical data dominated by highly successful male employees, it will learn that the gender variable is an important one. Moreover, the models behind this tool were a black box, so they could not explain why they made the choices they did.
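To make this concrete, here is a minimal sketch on purely synthetic data (not Amazon's actual system) of how a model trained on historically male-dominated hiring outcomes can learn a proxy for gender even after the gender column is removed:

```python
# Synthetic illustration: bias survives the removal of the protected attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)           # 1 = male, 0 = female (synthetic)
skill = rng.normal(0, 1, n)              # genuinely job-relevant signal
proxy = gender + rng.normal(0, 0.3, n)   # e.g. a hobby or keyword correlated with gender

# Historical labels: past hiring favored men independently of skill.
hired = (0.5 * skill + 1.5 * gender + rng.normal(0, 1, n)) > 1.0

X = np.column_stack([skill, proxy])      # gender itself is excluded from the features
model = LogisticRegression().fit(X, hired)
print(dict(zip(["skill", "proxy"], model.coef_[0].round(2))))
# The proxy feature carries a large positive weight: the model still rewards
# "looking male" even though it never saw the gender column.
```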
POSSIBLE SOLUTIONS AND HOW TO AVOID THIS RISK IN YOUR COMPANY
The only acceptable path forward for AI in recruiting is transparent algorithms that provide a succinct explanation of why they scored a candidate the way they did.
Ideally, algorithms should be able to provide this feedback to the candidate as well. Transparency is the only way forward for recruiting algorithms. If an algorithm produces just a score without any logic behind it, there will always be a danger that it is discriminating.
Fortunately, AI today can provide the logic behind its scoring. When you deploy an algorithm in your recruiting ops, always demand that it is transparent and provides the logic behind its results.
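As an illustration, here is a minimal sketch, with hypothetical features and weights, of the kind of per-candidate breakdown a transparent linear scoring model can surface to recruiters and candidates:

```python
# Sketch: explaining a linear scoring model. For a linear model, each
# feature's contribution to the score is simply weight * feature value,
# which can be reported back to the candidate in plain terms.
import numpy as np

feature_names = ["years_experience", "skills_match", "assessment_score"]  # hypothetical
weights = np.array([0.4, 1.2, 0.9])      # assumed learned coefficients
candidate = np.array([5.0, 0.7, 0.8])    # one candidate's feature vector

contributions = weights * candidate
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name}: {c:+.2f}")
print(f"total score: {contributions.sum():.2f}")
```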
Algorithmic transparency is not a feature, it is a requirement. It ensures that you are not missing out on great candidates and minimizes the risk of discriminatory practices, which can trigger a huge backlash.
Candidates also get a great, personalized experience when applying for a job if the algorithms deployed are transparent. GenAI has the power to provide the logic behind its decisions today.
We recruiters need to be more mindful and require transparency in the algorithms we use. I wouldn't work with a non-transparent algorithm in recruiting ops. Way too risky for my business.
ANOTHER PUBLIC BACKLASH: A LAWSUIT OVER ALGORITHMIC DISCRIMINATION
An African American plaintiff has alleged that a company’s systems prevented him from being hired on the basis of his race, age, and mental health.
Plaintiff Derek Mobley, an African American male over the age of 40 who also suffers from depression and anxiety, filed a lawsuit in a California district court. He stated that since 2018 he has been rejected from 80-100 job applications to companies that he believes utilize the company's screening tool for hiring purposes.
The company replied that the lawsuit was without merit. “We are committed to trustworthy AI and act responsibly and transparently in the design and delivery of our AI solutions to support equitable recommendations.
“We engage in a risk-based review process throughout our product lifecycle to help mitigate any unintended consequences, as well as extensive legal reviews to help ensure compliance with regulations.”
EXPERT OPINION
There is always a risk of facing a lawsuit for discriminatory hiring practices, especially when automation is involved. The best way to deal with this potential backlash is to have full transparency into the way the algorithms are built.
I would go even a step further. Candidates would love to have an algorithm pre-screen their application, even before they apply, and give them individual feedback on their CV. We cannot provide this feedback manually today because recruiters do not have time to respond individually to each applicant.
There is a great brand-elevation opportunity in using AI to automatically provide feedback on candidates' resumes, even before they apply, so we can help them bring their best qualities forward instead of sending a cold rejection email with no feedback.
LinkedIn has rolled out some features like this on job pages, e.g., "Am I a good fit?" and "Help me improve my resume." I believe all companies should create similar AI to give candidates the best experience from the get-go.
The Regulatory Environment of AI in Recruiting
IN EUROPE, THE “RIGHT TO EXPLANATION” IS ALREADY A REQUIREMENT FOR AUTOMATED DECISION-MAKING.
A key aspect of the EU’s regulations is the emphasis on transparency and accountability. Recruiting AIs must be designed so that candidates can understand how decisions are made.
For instance, the EU’s General Data Protection Regulation (GDPR) enforces the “right to explanation,” requiring organizations to provide insights into automated decision-making processes.
This regulation helps prevent the “black-box” effect of AI, where users are left in the dark about how decisions are reached, thereby ensuring a more transparent hiring process. The regulatory landscape encourages adopting practices such as algorithmic auditing and bias testing.
By regularly auditing AI models for bias, companies can detect and correct unfair patterns in their hiring decisions. In the US, regulators are also waking up to the potential for AI to harm recruiting processes.
For instance, New York City has introduced a regulation requiring bias audits for AI tools used in recruitment, compelling companies to demonstrate that their algorithms do not disproportionately impact marginalized groups.
The New York City Council voted 38-4 on November 10, 2021, to pass a bill requiring annual bias audits of artificial intelligence (AI) tools used in the city’s hiring processes.
Companies using such AI tools are responsible for disclosing to job applicants how the technology was used in the hiring process and must offer candidates alternative options, such as having a person process their application instead.
New York City will impose fines of up to $1,500 per violation on employers and vendors for undisclosed or biased AI use. The legislation lapsed into law without outgoing Mayor de Blasio’s signature and has been in effect since 2023.
It is a telling sign of how government has started to crack down on discriminatory, black-box AI use in hiring processes, and it foreshadows what other cities may do to combat AI-generated bias and discrimination.
A great technical article from the U.S. Equal Employment Opportunity Commission (EEOC) sets the tone for how recruiting algorithms should be created and used. To ensure that software, algorithms, and artificial intelligence (AI) used in employment decisions comply with Title VII of the Civil Rights Act of 1964, the EEOC recommends the following monitoring practices:
1. Regular Adverse Impact Analysis: Consistently assess whether the use of these tools results in disproportionately negative effects on protected groups, such as those defined by race, color, religion, sex, or national origin. This involves statistical evaluations to identify any disparities in outcomes (see the sketch after this list).
2. Validation of Selection Procedures: Ensure that the tools are valid predictors of job performance and are necessary for the business. This means demonstrating that the selection procedures are job-related and consistent with business necessity.
3. Transparency and Documentation: Maintain clear documentation of how these tools are used, including the data inputs, decision-making processes, and outcomes. Transparency aids in identifying potential biases and facilitates compliance reviews.
4. Periodic Reviews and Updates: Regularly review and update the algorithms to address any identified biases or changes in job requirements. This ongoing process helps in mitigating unintended discriminatory effects.
5. Training and Awareness: Educate HR personnel and decision-makers about the potential biases associated with AI and algorithmic tools. Training ensures that those involved in employment decisions understand the importance of fair and unbiased tool usage.
By implementing these practices, employers can better align their use of AI and algorithmic tools with federal equal employment opportunity laws, thereby promoting fair and unbiased employment decisions.
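As a concrete example of the adverse impact analysis in item 1, here is a minimal sketch of the “four-fifths rule” check from the EEOC’s Uniform Guidelines, using illustrative numbers only:

```python
# Four-fifths (80%) rule: a group's selection rate should be at least
# 80% of the rate of the most-selected group; lower ratios are a red
# flag for adverse impact. The counts below are purely illustrative.
groups = {  # hypothetical audit counts: (selected, applicants)
    "group_a": (48, 100),
    "group_b": (30, 100),
}
rates = {g: sel / apps for g, (sel, apps) in groups.items()}
best = max(rates.values())
for g, rate in rates.items():
    ratio = rate / best
    status = "FLAG: possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{g}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {status}")
```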
HOW AI IN RECRUITING CAN HELP PROMOTE DIVERSITY, EQUITY, AND INCLUSION (D.E.I.)
DEI is and will remain a priority for the future of recruiting. A majority of HR decision-makers (75%) stated that their company would prioritize diversity hiring, according to Jobvite data. In EY’s Belonging Barometer 3.0, 63% of Gen-Z workers reported they would choose a company that prioritizes DEI over one that doesn’t.
This figure is noteworthy, as Zoomers will comprise 30% of the labor force by 2025. Moreover, with the rise of AI, DEI will play a vital role in mitigating biases in hiring algorithms. DEI should guide AI, and AI should embrace DEI.
DEI should provide the design parameters for inclusive and transparent AIs in recruiting. A well-regulated AI system can be a powerful tool for promoting DEI in recruitment. Regulations support the development of AI models that help organizations identify and reduce disparities in hiring practices.
By using AI to analyze and adjust job descriptions, for example, companies can ensure that language is inclusive and does not discourage diverse candidates from applying. Furthermore, regulatory frameworks advocate for inclusive AI design by requiring stakeholder input, particularly from underrepresented groups.
By involving diverse voices in the development process, organizations can better anticipate potential biases and design more equitable systems. This aligns with broader DEI goals, as it leads to the creation of tools that consider the unique needs and experiences of all candidates.
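Returning to the job-description example above: one simple, transparent building block is a wordlist check for gender-coded language. The lists below are short illustrations, loosely inspired by research on gendered wording in job ads (Gaucher et al., 2011), not the full research lexicons:

```python
# Flag gender-coded words in a job ad so authors can rephrase them.
MASCULINE_CODED = {"aggressive", "competitive", "dominant", "rockstar", "ninja"}
FEMININE_CODED = {"collaborative", "supportive", "nurturing", "interpersonal"}

def audit_job_ad(text: str) -> dict:
    words = {w.strip(".,;:!?").lower() for w in text.split()}
    return {
        "masculine_coded": sorted(words & MASCULINE_CODED),
        "feminine_coded": sorted(words & FEMININE_CODED),
    }

ad = "We need an aggressive, competitive rockstar to join our collaborative team."
print(audit_job_ad(ad))
# {'masculine_coded': ['aggressive', 'competitive', 'rockstar'],
#  'feminine_coded': ['collaborative']}
```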
Conclusion
By now we are well informed about the candidates’ perspective on recruiting systems using AI. We also have a good understanding of the regulatory environment and DEI practices.
In the next article, we will delve into the parameters that will help us deploy AI in our recruiting operations in a way that will be effective but also safe, well-regulated, inclusive, and give candidates a great experience.
Stay tuned for the next article…
A small favor to ask…
We are a team of AI engineers and we are in the process of starting a new company. We are connecting with recruiters to get feedback on whether our product offers any value to them in the talent acquisition and screening process.
If you have 10 minutes to help us, it would be awesome. Please DM me or leave a comment or a like below, and I will reach out to you.
I promise to respect your time and provide only value to you.
Thank you
Alex
AI engineer, passionate about recruiting