The Use of Artificial Intelligence to Minimize HR Bias (Part 1)

Note: this is part one of two posts on how artificial intelligence can be used to reduce bias in various Human Resources processes. This article was co-written by Bernadette Smith and Rhodes Perry.

When a workforce is diverse, its talent has a broader understanding of the needs of the organization's diverse clients. Naturally, when an organization better understands the needs of its target market, it can better innovate its products and services – and that leads to an increase in revenue.

According to management consulting company McKinsey & Co., companies that exhibit gender and ethnic diversity are 15 percent and 35 percent more likely, respectively, to outperform those that don't. Their research shows that organizations with more racial and gender diversity also have better sales revenue, more customers, and higher profits.

Unfortunately, all of us, even the most well-meaning people in Human Resources, are guilty of bias, which undermines the creation of a diverse and inclusive workforce. You may have heard the story of a man named Jose, who was having no luck with his job applications. When he began applying as “Joe” instead, he suddenly started receiving calls.

This bias, called unconscious bias, is so subtle that most of us don't notice it or catch ourselves in the act. Here are some other common ways it can play out in HR:

•       Geography bias (e.g., local job candidates receiving preference over non-local candidates)

•       Gender bias (e.g., women with children are given fewer opportunities than men, yet women are also penalized when they are not seen as nurturing)

•       Appraisal bias (e.g., a manager compares an employee's performance to other employees instead of to the company standard)

•       Association bias (e.g., favoring those who went to the same college, belong to the same organization or association, etc.)

The great news is that technology, specifically artificial intelligence (AI), offers solutions to minimize bias and thereby create a more diverse workforce – and, as a result, increase revenue. In fact, AI is currently being used within human resources processes to:

•       Set hiring priorities (e.g., prioritize which positions need to be filled first)

•       Suggest hiring trends

•       Neutralize resume screening (e.g., remove certain affiliations, remove geography)

•       Standardize job descriptions (e.g., trigger alerts when gendered words such as the masculine-coded “competitive” appear in job descriptions – see the sketch after this list)

•       Assess leaders and potential leaders (e.g., identify employees for internal promotions)

•       Improve employee retention (e.g., identify which employees can be retained)

•       Standardize employee assessments (e.g., customize and automate appraisal templates)

•       Synthesize performance review data (e.g., suggest specific improvement actions for each employee)

•       Synthesize exit interview data and provide insights on why employees leave
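As a concrete illustration of the job-description item above, here is a minimal sketch in Python of how such an alert might work, assuming a small hand-curated lexicon. The GENDERED_WORDS table and flag_gendered_words function are illustrative names, not part of any particular product; real screening tools rely on much larger, research-backed word lists.

```python
# Minimal sketch: flag gendered words in a job posting against a small
# hand-curated lexicon. All names here are illustrative, not from any
# specific HR product.

import re

# Tiny illustrative lexicon (real lexicons contain hundreds of terms).
GENDERED_WORDS = {
    "competitive": "masculine-coded",
    "dominant": "masculine-coded",
    "rockstar": "masculine-coded",
    "nurturing": "feminine-coded",
    "supportive": "feminine-coded",
}

def flag_gendered_words(job_description: str) -> list[tuple[str, str]]:
    """Return (word, coding) pairs for each flagged term in the text."""
    tokens = re.findall(r"[a-z]+", job_description.lower())
    return [(t, GENDERED_WORDS[t]) for t in tokens if t in GENDERED_WORDS]

if __name__ == "__main__":
    posting = "We want a competitive, dominant rockstar to join our team."
    for word, coding in flag_gendered_words(posting):
        print(f"Alert: '{word}' is {coding}")
```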

While AI can help reduce unconscious bias and lead to a more diverse workforce, it's not a panacea. Simply put, AI depends on all of us humans to curate the data it uses in its analysis.

In its current form, AI is simply an extension of our existing culture, which is riddled with biases and stereotypes. This means that as we program AI, and as AI learns from us through our words, datasets, and programming, we run the risk of machine learning perpetuating our culture's biases. For example, Google's translation software converts gender-neutral pronouns from several languages into male pronouns (he, him, his) when referring to medical doctors, and into female pronouns (she, her, hers) when referring to nurses, perpetuating gender-based stereotypes.

This built-in bias can show up in a number of ways in AI HR technology. For example, if only one employee provides the evaluation data used to set the standard for performance reviews, there are not enough perspectives to establish balance and generate unbiased datasets. Similarly, when a team of people conducts interviews without standardized questions, the responses lack the consistency needed to generate unbiased data.
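To see why a single evaluator matters, here is a toy simulation, with all numbers invented for illustration: a performance standard derived from one lenient rater inherits that rater's personal offset wholesale, while a standard pooled across several raters averages individual offsets out.

```python
# Toy illustration of the single-rater problem: one rater's personal
# leniency shifts the whole "standard". All numbers are invented.

import random

random.seed(0)

# Employees' "true" performance on a 1-5 scale.
TRUE_SCORES = [random.gauss(3.0, 0.5) for _ in range(50)]

def rate(scores, bias):
    """One rater's view: true score plus that rater's personal offset."""
    return [s + bias for s in scores]

# Standard set by a single lenient rater (+0.8 offset)...
single_rater_standard = sum(rate(TRUE_SCORES, 0.8)) / len(TRUE_SCORES)

# ...versus a standard pooled across raters with varied offsets.
offsets = [0.8, -0.3, 0.1, -0.5, 0.2]
pooled = [r for b in offsets for r in rate(TRUE_SCORES, b)]
pooled_standard = sum(pooled) / len(pooled)

print(f"single-rater standard: {single_rater_standard:.2f}")
print(f"pooled standard:       {pooled_standard:.2f}")
# The single-rater standard carries the full +0.8 bias; the pooled
# standard largely cancels individual offsets out.
```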

When a dataset has a low volume of responses, it is also inherently more prone to bias because there is less variety to learn from. Even a company like Walmart, which hires over 1,000 people per day, doesn't generate a massive supply of data: 1,000 hires per day works out to roughly 365,000 records per year, which is child's play for machine learning. The results, again, can perpetuate any biases that are unconsciously built into the company's processes.

In part two, we’ll address solutions that can improve AI’s reliability in reducing bias.