
Bias in AI Recruiting: What to Do?

Discover the current ethical challenges that organizations face when adopting AI for hiring, along with our recommended ways of overcoming them.

As AI's potential to impact socially important areas such as criminal justice and recruiting grows, AI consultants have raised many questions about bias in its decision making. Numerous AI-based hiring tools already help companies find talent faster and cheaper than ever before by quickly spotting hidden interconnections between a plethora of personality traits and linking that analysis to performance potential.

However, this has proven to be both a blessing and a curse, as we can’t always tell if the logic behind AI-generated decisions is ethical and justifiable. Let’s dive deeper into the current ethical problems of AI in recruiting and see what companies can do to mitigate them.


Is It Even Lawful?

Hiring professionals' increasing interest in AI stems from the long history of psychometric assessment. It has been widely tested and proven that certain personality traits, cognitive abilities, and mental health conditions can predict an individual's success in a particular role. However, in most countries, it's unlawful to deny employment based on physical or mental disabilities unless the job is so specific that the law requires the employer to test applicants for them. In the US, for example, the Americans with Disabilities Act forbids employers from asking candidates about their physical and mental conditions, as this information is considered private.

With the emergence of AI, personality assessment has become more attainable than ever before. Hirers can now gauge a candidate's suitability for a particular role at the most granular level by retrieving information from their social media accounts, facial expressions, and even voice.

For example, in a study published in the August 2020 edition of the Journal of Personality, Dr. Kazuma Mori and Principal Investigator Masahiko Haruno used an ML algorithm to infer 24 attributes, including socioeconomic status, bad habits, depression, and schizophrenia, from people's Twitter activity.

However, this is where problems arise. Is it ethical to analyze this data on such a deep level without applicants' consent? Paradoxically, while every company knows that it can't directly use private information to make hiring decisions, nothing prohibits it from using tools that can infer this data from publicly available information.


To draw a parallel, consider the lie detector test. Employers are prohibited by law from using such tools in the hiring process. Now imagine a superhuman who can accurately tell whether someone is telling the truth. While it would be perfectly legal to rely on the superhuman's abilities to make better hiring decisions, they would be doing exactly what the forbidden lie detector does.

There are certainly cases where companies intentionally use candidates' private information, which is unlawful, unethical, and can easily go unnoticed. On the other side of the coin, however, some organizations are simply unaware that their tools use sensitive information as a basis for employment suggestions.

As you can see, it all comes down to specific regulations governing the use of emerging technologies in HR management. Governments need to implement clear regulatory frameworks that address candidates' and employees' privacy in the context of these technologies. It's hard to predict how long this will take: not only is formulating such frameworks particularly complex, but much of the research needed to inform them is still missing. Moreover, the vendors of these AI tools will also need time to adjust their software to new regulations. Until then, confusion among companies venturing into the world of AI-enabled hiring tools is inevitable.

Our AI solutions are built with regulatory compliance at their core.
Discuss your project with Iflexion.

The Roots of Bias in AI

To understand the root cause of AI bias, we need to look under the hood of its algorithms. The majority of AI systems are fed with data related to people who succeeded in corresponding job roles. In very simple terms, AI compares applicants' profiles to the training dataset and suggests which candidates are most likely to succeed based on how closely their attributes match.
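To make the mechanics concrete, here is a deliberately naive Python sketch of such attribute matching. The profiles, attribute names, and scoring are hypothetical illustrations, not any real product's logic:

```python
# A deliberately naive sketch of attribute-matching candidate ranking.
# All attribute names and profiles below are hypothetical.

def match_score(candidate: dict, successful_profiles: list) -> float:
    """Return the average share of attributes the candidate shares
    with profiles of previously successful employees."""
    scores = []
    for profile in successful_profiles:
        shared = sum(1 for k, v in profile.items() if candidate.get(k) == v)
        scores.append(shared / len(profile))
    return sum(scores) / len(scores)

successful_profiles = [
    {"degree": "CS", "hobby": "chess", "city": "Seattle"},
    {"degree": "CS", "hobby": "rowing", "city": "Seattle"},
]
candidate = {"degree": "CS", "hobby": "painting", "city": "Austin"}

print(f"Match: {match_score(candidate, successful_profiles):.0%}")
```

Notice the trap: if every successful profile happens to share an irrelevant trait, such as the city in this toy example, anyone who differs is penalized, which is exactly how matching against historical winners narrows the pool.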

You can probably see the catch here: the dataset, an integral part of any AI algorithm, is still selected by our indisputably biased human minds. Even when the choice of training data is straightforward and based on definitive performance metrics, it can still be flawed and unrepresentative. In that case, the algorithms will only amplify existing biases and continue to narrow candidate pools.


Unfortunately, this dataset problem comes up at almost every step of the hiring process. The most heavily marketed and easily implemented AI hiring tools help recruiters formulate job postings in a way that appeals to the desired target audience and filters out the rest. However, most AI algorithms simply reinforce pre-existing biases. For example, a recent study showed that targeted Facebook ads for supermarket cashier positions were shown to women in 85% of cases. Again, AI is not the issue here, as it simply mirrors our real-world stereotypes.

Amazon should probably take the prize for the most telling example of bias in AI-powered hiring tools. In short, the company found out that its recruiting algorithm for software developers was heavily discriminating against women. As one of the largest companies in the world, Amazon could be expected to trust the diversity of its datasets. Then again, the widespread dominance of men in the IT industry is nothing new either.


What to Do about AI Bias in Hiring?

Now the grand question emerges: what can we do about AI bias in hiring? While the answer is inevitably multilayered, the most important point is that bias, in any form and context, is ultimately our responsibility. Fine-tuning these cutting-edge technologies is a matter of fair competition, reduced discrimination, and, ultimately, a workplace that welcomes diversity and equality. Let's see what steps organizations can take to maximize the potential of AI in recruiting.

Review the Training Data

In most cases, AI bias can be anticipated by quickly evaluating the context and the training data. If you decide to use your own datasets to hire new software developers, and 95% of the people in them are white males in their 20s, you will most likely discriminate against women and minorities. While real cases are usually more complicated than that, it all comes down to continuous assessment of your training data.
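As a minimal sketch of such an assessment, assuming your historical hiring records sit in a pandas DataFrame with a hypothetical gender column, two quick checks expose both representation and outcome skews:

```python
import pandas as pd

# Hypothetical training data; in practice, load your historical hiring records.
df = pd.DataFrame({
    "gender": ["male"] * 19 + ["female"],
    "hired":  [1] * 15 + [0] * 4 + [1],
})

# Representation: how skewed is the dataset itself?
print(df["gender"].value_counts(normalize=True))

# Outcomes: does the hire rate differ across groups?
print(df.groupby("gender")["hired"].mean())
```

A 95/5 split like the one above is a red flag on its own, before any model is trained.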

Audit Continuously

Next, set up technical tools that help diminish bias in AI. There are multiple established processes for detecting the root causes of bias, whether in the data or in the ML algorithm itself.

For example, IBM's open-source AI Fairness 360 toolkit helps you evaluate bias in datasets and ML models using state-of-the-art algorithms. Google has published an elaborate list of practices that help companies establish coherent frameworks covering everything from facial recognition software to NLP applications. Given the self-learning nature of AI algorithms, it's crucial to make these audits systematic and to involve third parties in the process.
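As an illustration, a minimal dataset audit with AI Fairness 360 might look like the following sketch; the toy hiring data and column names are made up for the example:

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical hiring outcomes: 1 = hired; protected attribute 'sex' (1 = male).
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "hired": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Disparate impact below ~0.8 is a common red flag (the "four-fifths rule").
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```

Running such metrics on every dataset refresh, rather than once at deployment, is what turns a one-off check into the systematic audit described above.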


Make It Transparent, Lawful, and Ethical

Any data collection practice requires regulatory compliance, and as adoption grows, big data privacy risks are becoming more apparent.

While there are multiple regulatory frameworks, including China's Personal Information Security Specification and the California Consumer Privacy Act, let's focus on the most globally recognized data protection law: the GDPR.

First of all, it's critical to note that the GDPR applies not only to companies registered in the EU but to any organization that processes the data of EU citizens.

Secondly, the GDPR explicitly states that automated decision making and profiling based on racial or ethnic origin, political opinions, religion, trade union membership, health status, or sexual orientation is allowed only under specific conditions. The implication speaks for itself: your data scientists need to build discrimination-free models. From a technical perspective, however, it's not enough to simply exclude the labels marking race or religious beliefs from the models, as the algorithm can link other, unprotected attributes to these classifications.

AI models are famous for proxying features via multiple factors, like postal code, height, etc. When the data input is biased, the AI model will find a way to replicate the bias in the outcomes, even if the bias isn’t explicitly included in the variables of the model.
Roald Waaijer, Director Risk Advisory, Deloitte
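
One practical way to test for such proxies is to try to predict the protected attribute from the supposedly neutral features: if even a simple model succeeds well above chance, proxies are present. The sketch below uses scikit-learn and entirely hypothetical data:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical applicant features after removing the protected column itself.
X = pd.DataFrame({
    "postal_code":      [98101, 98101, 98102, 10001, 10001, 10002] * 5,
    "years_experience": [3, 5, 2, 4, 6, 3] * 5,
})
# The protected attribute we *think* we removed from the model.
y = [1, 1, 1, 0, 0, 0] * 5

# If "neutral" features predict the protected attribute well above chance,
# they act as proxies and the downstream model can reconstruct the bias.
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print(f"Protected attribute predictable with {scores.mean():.0%} accuracy")
```

In this toy case, the postal code alone reconstructs the protected attribute almost perfectly, which is precisely the replication effect Waaijer warns about.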

Thirdly, the GDPR also requires companies to clearly explain the reasoning behind automated decision making. In hiring, this means that companies using AI tools are obliged to explain why a candidate wasn't chosen for a particular role. This issue is especially thorny for AI-enabled recruiting, as the most advanced and implementation-worthy AI tools often struggle to explain the rationale behind an individual decision.

However, it's worth noting that the 'right to explanation' is mentioned only in Recital 71 of the GDPR, which means it is not legally binding and only provides guidance in cases of regulatory ambiguity. While lawyers and privacy professionals continue to debate this controversy, we suggest that companies adopt currently available interpretation tools such as LIME or Deloitte's GlassBox for their recruiting processes. In fact, tackling the black-box AI problem is not just about regulatory compliance but also about realizing this technology's full potential.
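For instance, a minimal LIME setup for explaining a single screening decision could look like the sketch below; the model, feature names, and data are placeholders for the example:

```python
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

# Hypothetical screening data: two features, label 1 = shortlisted.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=["years_experience", "test_score"],
    class_names=["rejected", "shortlisted"],
    mode="classification",
)

# Explain one individual decision: which features pushed it which way?
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=2)
print(explanation.as_list())
```

The output attributes the decision to individual features, which is the kind of per-candidate rationale the GDPR's transparency requirement points toward.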

Make Human Supervision Mandatory

As controversial as it sounds, you need to involve humans (a species with strong tendencies toward prejudice) in AI-powered automated decision making (inherently biased systems that humans developed to mitigate bias). Given the relative immaturity of AI systems, the human-in-the-loop approach is needed to ensure that obviously biased systems, like the one in Amazon's case, can be shut down before the problems escalate. In any case, the symbiosis of human and artificial intelligence has proven more reliable in achieving tangible results.
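One common pattern, sketched below with hypothetical thresholds, is to let the model auto-approve only high-confidence positives and route everything else, including every potential rejection, to a recruiter:

```python
# A minimal human-in-the-loop gate; the threshold and labels are hypothetical.
AUTO_SHORTLIST_THRESHOLD = 0.90

def route(candidate_id: str, shortlist_probability: float) -> str:
    """Auto-shortlist only high-confidence positives; a human reviews the rest,
    so the model alone can never reject a candidate."""
    if shortlist_probability >= AUTO_SHORTLIST_THRESHOLD:
        return f"{candidate_id}: shortlisted automatically"
    return f"{candidate_id}: queued for human review"

for cid, p in [("A-17", 0.97), ("B-04", 0.55), ("C-32", 0.03)]:
    print(route(cid, p))
```

The asymmetry is deliberate: automation speeds up the easy positive calls, while the costly mistakes, wrongful rejections, always get human eyes.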


Closing Thoughts

Regardless of how advanced HR tools and methods become, most organizations still rely on gut feeling when assessing talent. It takes a recruiter only a few seconds of looking at a resume to decide who will fit the workplace culture better. As a society, we've learned to either accept this or combat it with all the wrong methods. Diversifying for the sake of diversification and a polished public image has become the new standard. While unfortunate, there are no clear-cut ways to eliminate bias from human decision making completely.

With the introduction of AI into recruiting and its first few inevitably disastrous outcomes, media outlets were quick to capitalize on the opportunity for dystopian headlines. What they often overlook is that AI is usually a more controlled reflection of human behavior.

We are furious about machines opting for inequality as a trade-off for better performance, while our own decision making has been notoriously prejudiced for decades. The catch, however, is that we are far more likely to train algorithms to be fair than to mitigate bias in our own thinking. Our tendency toward bias is deeply rooted in highly complex cognitive mechanisms. Simplifying day-to-day decisions with the help of prejudices is what protects us from being overwhelmed by processing every bit of information, say, when deliberating over which restaurant is better for today's lunch. In a nutshell, we may never fully understand the human brain, but we are well on our way to understanding, and controlling, AI reasoning.

With the introduction of AI, the previously unattainable notion of unbiased recruiting suddenly seems possible. While some AI initiatives in hiring have already proven detrimental, that doesn't mean the ship needs to be abandoned. Organizations need to approach this highly sensitive area of AI with a clear roadmap while continuously assessing risks and calling out their own biases.
