When new technology comes along, it’s important to think about the ethical implications, especially when it comes to things like fairness, transparency, and bias.

Governments and regulators have a big part to play in making sure that AI-powered recruiting tools are designed responsibly and that they prioritize fairness, diversity, equity, and inclusion.

In other words, we need to make sure that these tools don’t discriminate and give everyone a fair shot at getting a job.


Short on time? Here is a quick summary of the full article

As AI continues to transform recruitment, organizations must address ethical concerns such as fairness, transparency, and bias. While AI enhances efficiency and improves candidate experiences, it also raises challenges—such as algorithmic bias and the risk of making hiring processes impersonal.

Key Challenges and Learnings:

  • AI’s Limitations in Hiring: Although 67% of talent professionals see AI adoption as a key hiring trend for 2025, concerns remain about its accuracy, bias, and ability to fairly assess candidates.
  • Amazon’s Failed AI Hiring Model: Amazon’s early AI recruitment tool was scrapped after it was found to discriminate against women, highlighting the risks of biased training data.
  • The Need for Transparency: AI-driven hiring must offer clear explanations for its decisions to ensure fairness and accountability.

Regulatory Landscape:

  • Europe’s AI Regulations: GDPR enforces the “right to explanation,” requiring transparency in AI-driven decision-making.
  • New York City’s AI Hiring Law: Companies using AI for hiring must conduct annual bias audits, with penalties for undisclosed or biased AI use.
  • U.S. Equal Employment Opportunity Commission (EEOC) Guidelines: Recommends bias monitoring, algorithmic transparency, and compliance with anti-discrimination laws.

AI as a Tool for DEI (Diversity, Equity, and Inclusion):

  • AI can promote diversity by ensuring inclusive job descriptions and reducing hiring biases.
  • Companies prioritizing DEI are more attractive to younger job seekers, such as Gen Z, who value workplace inclusivity.

Conclusion:

The future of AI in recruitment depends on balancing automation with ethical responsibility. Transparent, well-regulated AI can improve hiring while promoting inclusivity and fairness. Businesses that embrace this approach will not only reduce legal risks but also enhance candidate trust and experience.

Interested? Keep on reading for the full article below


AI’s Growing Pains in Recruiting: Reality Kicks In

AI has solid potential, but it’s not the one-stop shop many expected.

AI can really boost productivity and make recruitment way easier. But the tech we have now isn’t optimized for the tough stuff, like assessing soft skills or making sure we are not discriminating.

To help identify key business drivers heading into 2025, Korn Ferry asked over 400 talent professionals around the world to share their insights for its latest Talent Acquisition Trends report.

And increased AI usage came out on top: 67% of respondents named it the top talent acquisition trend for 2025. However, AI is proving that it's not quite the game changer companies hoped for, and taking humans out of hiring carries real concerns.

40% of talent specialists worry that too much AI in recruitment could make the process impersonal, causing them to miss out on top candidates.

Another 25% are concerned about algorithmic bias, where biased training data leads to unfair outcomes.

Companies that expected AI to be a game changer for talent acquisition are also growing concerned about its inaccuracies.

If mismanaged, AI can undermine the hiring process, but when used strategically and appropriately, it can add real value to the experience—for candidates, recruiters, and hiring managers.

It seems that in 2025, more employers will use AI to make the candidate journey the centerpiece of the recruiting process. AI is not just about automating hiring; it's about making the recruiting experience smoother and more equitable for candidates. We already looked into how candidates feel about AI in recruiting.

Now let's take a closer look at how governments are responding to these technological innovations in recruiting, starting with some public AI backlashes that prompted them to act.


Lessons Learned: Public Backlashes of AI in Recruiting and How to Avoid Them

Amazon Recruiting Algorithm

Amazon has been trying to use AI in recruiting since 2015. The company built a team of 12 engineers to create algorithms, wanting an engine that, given 100 resumes, would spit out the top five candidates to hire.

They quickly noticed that the algorithm had inherited biases and was penalizing women applicants because of the male dominance in the training data.

They edited the algorithm to disregard gender, but there was no guarantee that it would not pick up on other variables that still carried bias.

Eventually, they dismantled the team and used a watered-down version of the algorithm that could only provide recommendations to recruiters rather than automatically sorting candidates.

My Expert Opinion on Why This Happened

Amazon used existing data of their employees to train their algorithms.

When algorithms are created this way, they learn to mimic the data and assign importance to specific variables contained in it.

Algorithms do not care about gender per se, but given prior data of highly successful male employees, they will learn that the gender variable is an important one.

Moreover, the algorithms used to build this tool were a black box, so the system could not explain why it made the choices it did.
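To make this concrete, here is a toy sketch (with hypothetical data and a hypothetical proxy keyword) of how a model trained on historical hires ends up rewarding whatever correlates with past success, including proxies for gender such as a men's-club keyword on a resume:

```python
# Illustrative only: synthetic historical hiring data where past hires were
# predominantly male, so a gender proxy correlates with the "hired" label.
# Each record is (has_mens_club_keyword, years_experience, hired).
history = [
    (1, 5, 1), (1, 3, 1), (1, 7, 1), (1, 2, 1),
    (0, 6, 0), (0, 4, 0), (1, 4, 1), (0, 8, 0),
    (0, 5, 1), (1, 1, 0),
]

def feature_hire_rate(data, feature_index):
    """Hire rate among candidates where the feature is present vs absent."""
    present = [hired for *feats, hired in data if feats[feature_index] == 1]
    absent = [hired for *feats, hired in data if feats[feature_index] == 0]
    rate = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return rate(present), rate(absent)

# Any learner fit to this data will find the proxy keyword highly predictive,
# even though it has nothing to do with job performance.
with_kw, without_kw = feature_hire_rate(history, 0)
print(f"hire rate with keyword:    {with_kw:.0%}")   # 83%
print(f"hire rate without keyword: {without_kw:.0%}")  # 25%
```

The model never sees a "gender" column; it simply latches onto whichever variables separated past hires from past rejects, which is exactly how Amazon's tool ended up penalizing women.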

Possible Solutions And How To Avoid Such A Reputational Risk In Your Company

The only acceptable path forward for AI in recruiting is transparent algorithms that provide a succinct explanation of why they scored the candidate the way they did.

Ideally, algorithms should be able to provide this feedback to the candidate as well. Transparency is the only way forward for recruiting algorithms.

If an algorithm provides just a score without any logic behind it, there is always a danger of it being discriminatory.

Fortunately, AI today is able to provide the logic behind its scoring. When you deploy an algorithm in your recruiting ops, always demand that it is transparent and provides the logic behind its results.
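What "transparent" means in practice: every point the scorer awards comes with a human-readable reason that can be shown to the candidate and to an auditor. A minimal sketch, with hypothetical criteria and weights:

```python
def score_candidate(candidate: dict) -> tuple[int, list[str]]:
    """Score a candidate and return the reasons behind every point awarded.

    The criteria and weights below are illustrative, not a real rubric.
    """
    score, reasons = 0, []
    if candidate.get("years_experience", 0) >= 3:
        score += 40
        reasons.append("3+ years of relevant experience (+40)")
    matched = set(candidate.get("skills", [])) & {"python", "sql"}
    if matched:
        pts = 20 * len(matched)
        score += pts
        reasons.append(f"required skills matched: {sorted(matched)} (+{pts})")
    if candidate.get("certification"):
        score += 10
        reasons.append("holds a relevant certification (+10)")
    return score, reasons

score, reasons = score_candidate(
    {"years_experience": 4, "skills": ["python", "excel"], "certification": True}
)
print(score)   # 70
for r in reasons:
    print("-", r)
```

The same structure works when the scoring itself is done by a model: require the vendor's API to return the contributing factors alongside the score, not the score alone.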

Algorithmic transparency is not a feature, it is a requirement.

It will ensure that you are not missing out on great candidates and will minimize the risk of discriminatory practices, which can provoke a huge backlash.

Candidates also get a more personalized experience when applying for a job if the algorithms deployed are transparent. GenAI today has the power to provide the logic behind its decisions.

We, the recruiters, need to be more mindful and require transparency in the algorithms we use. It's the only way forward.

I wouldn't work with a nontransparent algorithm for recruiting ops. Way too risky for my business.

Another Public PR Backlash: A Lawsuit For Algorithmic Discrimination

An African American plaintiff has alleged that a company’s systems prevented him from being hired on the basis of his race, age, and mental health.

Plaintiff Derek Mobley, an African American male over the age of 40, filed a lawsuit in a California district court.

He stated that since 2018, he has been rejected from 80-100 job applications to companies that he believes utilize the same company's screening tool for hiring purposes.

The company replied that the lawsuit was without merit. “We are committed to trustworthy AI and act responsibly and transparently in the design and delivery of our AI solutions to support equitable recommendations. We engage in a risk-based review process throughout our product lifecycle to help mitigate any unintended consequences, as well as extensive legal reviews to help ensure compliance with regulations.”

My Expert Opinion on This Backlash

There is always a risk of facing a lawsuit for discriminatory hiring practices, especially when automation is involved.

The best way to deal with this potential backlash is to have full transparency in the way the algorithms are built.

I would go even a step further. Candidates would love an algorithm that prescreens their application before they even apply and gives them individual feedback on their CV.

We cannot provide this feedback manually today because recruiters do not have time to provide individual feedback to each applicant.

But it is possible with automation. I believe this is a great opportunity for brand elevation to treat candidates with this level of transparency and make the whole process of applying for jobs more individualized and humane.


The Regulatory Environment Of AI In Recruiting

In Europe, the "right to explanation" is already a requirement for any AI.

One of the key aspects of these EU regulations is the emphasis on transparency and accountability. Recruiting AIs must be designed to allow candidates to understand how decisions are made.

For instance, the EU’s General Data Protection Regulation (GDPR) enforces the “right to explanation,” requiring organizations to provide insights into automated decision-making processes.

This regulation helps prevent the “black-box” effect of AI, where users are left in the dark about how decisions are reached, thereby ensuring a more transparent hiring process. The regulatory landscape encourages adopting practices such as algorithmic auditing and bias testing. By regularly auditing the AI models for bias, companies can detect and correct unfair patterns in their hiring decisions.

In the US, regulators are also waking up to the potential of AI to harm recruiting processes.

For instance, New York City has introduced a regulation requiring bias audits for AI tools used in recruitment, compelling companies to demonstrate that their algorithms do not disproportionately impact marginalized groups.

The New York City Council voted 38-4 on November 10, 2021, to pass a bill requiring annual bias audits of artificial intelligence (AI) tools used in hiring.

Companies using AI-driven hiring tools will be responsible for disclosing to job applicants how the technology was used in the hiring process and must offer candidates alternative options, such as having a person process their application instead.

The city of New York imposes fines of up to $1,500 per violation on employers and vendors for undisclosed or biased AI use. Having lapsed into law without outgoing Mayor de Blasio's signature, the legislation has been in effect since 2023.

It is a telling move in how governments have started to crack down on discriminatory, black-box AI use in hiring processes, and it foreshadows what other cities may do to combat AI-driven bias and discrimination.

A great technical article from the U.S. Equal Employment Opportunity Commission (EEOC) sets the tone for how recruiting algorithms should be created and used. To ensure that software, algorithms, and AI used in employment decisions comply with Title VII of the Civil Rights Act of 1964, the EEOC recommends the following monitoring practices:

1. Regular Adverse Impact Analysis: Consistently assess whether the use of these tools results in disproportionately negative effects on protected groups, such as those defined by race, color, religion, sex, or national origin. This involves statistical evaluations to identify any disparities in outcomes.

2. Validation of Selection Procedures: Ensure that the tools are valid predictors of job performance and are necessary for the business. This means demonstrating that the selection procedures are job-related and consistent with business necessity.

3. Transparency and Documentation: Maintain clear documentation of how these tools are used, including the data inputs, decision-making processes, and outcomes. Transparency aids in identifying potential biases and facilitates compliance reviews.

4. Periodic Reviews and Updates: Regularly review and update the algorithms to address any identified biases or changes in job requirements. This ongoing process helps in mitigating unintended discriminatory effects.

5. Training and Awareness: Educate HR personnel and decision-makers about the potential biases associated with AI and algorithmic tools. Training ensures that those involved in employment decisions understand the importance of fair and unbiased tool usage.

By implementing these practices, employers can better align their use of AI and algorithmic tools with federal equal employment opportunity laws, thereby promoting fair and unbiased employment decisions.


How AI In Recruiting Can Help Promote D.E.I.

D.E.I. is and will be a priority for the future of recruiting.

A majority of HR decision-makers (75%) stated that their company would prioritize diversity hiring, according to Jobvite data.

In EY’s Belonging Barometer 3.0, 63% of Gen-Z workers reported they would choose a company that prioritizes DEI over one that doesn’t. This figure is noteworthy as Zoomers will comprise 30% of the labor force by 2025.

Moreover, with the rise of AI, DEI will play a vital role in mitigating biases in hiring algorithms. DEI should guide AI, and AI should embrace DEI: DEI should provide the design parameters for inclusive and transparent AI in recruiting.

A well-regulated AI system can be a powerful tool for promoting D.E.I. in recruitment. Regulations support the development of AI models that help organizations identify and reduce disparities in hiring practices.

By using AI to analyze and adjust job descriptions, for example, companies can ensure that language is inclusive and does not discourage diverse candidates from applying.
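A simple version of that job-description check can be sketched as a scan for gender-coded terms, in the spirit of research on gendered wording in job ads. The word lists below are small illustrative samples, not a complete lexicon:

```python
import re

# Hypothetical, abbreviated word lists for illustration only.
MASCULINE_CODED = {"ninja", "rockstar", "dominant", "aggressive", "fearless"}
FEMININE_CODED = {"nurturing", "supportive", "collaborative", "empathetic"}

def flag_coded_language(text: str) -> dict:
    """Return the gender-coded words found in a job description."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return {
        "masculine_coded": sorted(words & MASCULINE_CODED),
        "feminine_coded": sorted(words & FEMININE_CODED),
    }

ad = "We need an aggressive sales ninja to dominate the market."
print(flag_coded_language(ad))
# {'masculine_coded': ['aggressive', 'ninja'], 'feminine_coded': []}
```

Production tools use much richer lexicons and context-aware models, but even this crude scan surfaces wording that research suggests discourages some candidates from applying.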

Furthermore, regulatory frameworks advocate for inclusive AI design by requiring stakeholder input, particularly from underrepresented groups. By involving diverse voices in the development process, organizations can better anticipate potential biases and design more equitable systems.

This aligns with broader D.E.I. goals, as it leads to the creation of tools that consider the unique needs and experiences of all candidates.


Conclusion

By now we are well informed about the government’s perspective on recruiting systems using AI. We also have a good understanding of the regulatory environment and DEI practices.

In a future article, we will focus on the parameters that will help us deploy AI in our recruiting operations in a way that:

  1. Effective: saving us time on boring manual tasks so we can spend more time with candidates
  2. Safe: our AI is transparent, protecting our reputation and operations, and gives us the data to be ready for any government audit
  3. Inclusive: helping us create a diverse workforce that makes the DNA of our companies effective and resilient
  4. Provides candidates with a great experience from the moment we reach out to them and throughout the interview and onboarding

What are your thoughts on this article? Let me know in the comments!

Looking forward to connecting with you.

Alex Louizos

AI engineer, Recruiter


Help Us Shape the Future of Recruitment – We Need Your Input!

We’re building an innovative AI platform designed to give recruiters more time to focus on what truly matters—the human connection with candidates. It’s not just about efficiency; it’s about bringing the personal touch back to recruitment.

And here’s where you come in:

We’re looking for beta testers to help us refine this platform and make sure it delivers real value. It’s completely free, with no strings attached—just your honest feedback.

If you’re interested in transforming the way you recruit and being part of something exciting, we’d love to have you on board!

Reach out to me with the subject "test" and I will connect with you and onboard you onto the platform immediately:

📩 Email: alex@manxmachina.com

💼 LinkedIn: Message me directly here

Let’s make recruitment better together! 🚀


