ARTIFICIAL INTELLIGENCE IN HR MANAGEMENT: WHY NOT JUST FLIP A COIN?

The speed at which the rhetoric around digital transformation in management has moved from Big Data (BD) to Machine Learning (ML) to Artificial Intelligence (AI) is staggering. The gap between rhetoric and reality, however, remains wide: 41% of CEOs report that they are not at all prepared to make use of new data analytics tools, and only 4% say they are prepared “to a large extent” (IBM). In a recent publication, Valery Yakubovich, professor of management, together with Peter Cappelli and Prasanna Tambe of the Wharton School, identifies four challenges in applying data science techniques to HR practices and proposes practical responses to them.

All That Glitters Is Not Gold

The promise of data analytics is attractive… and easier to fulfill in a field such as marketing, where answers to closed-ended questions such as “what predicts who will buy a product?” and “how will X affect sales?” are sought. However, marketing is not HR. One can imagine why it is more acceptable to let computers interact with anonymous customers than with employees whom managers know personally.

Important and nuanced questions such as “what constitutes a good employee?” persist, adding further complexity to the matter. Moreover, HR data sets are tiny compared to data on customer purchases, and data science techniques perform poorly on relatively rare outcomes. Firing someone for poor performance is one example: it is rarely observed, which is rather surprising given the serious consequences for individuals and society.
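
To see why rare outcomes defeat standard prediction tools, consider a minimal synthetic sketch (the 2% firing rate and the features are invented for illustration): a model that never predicts a firing already looks highly accurate, while identifying no actual cases.

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score, recall_score

# Illustrative, synthetic data: 2,000 employees, ~2% fired for poor performance.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))               # five arbitrary HR features
y = (rng.random(2000) < 0.02).astype(int)    # 1 = fired, a rare outcome

# A classifier that always predicts "not fired" already scores ~98% accuracy...
baseline = DummyClassifier(strategy="most_frequent").fit(X, y)
pred = baseline.predict(X)
print(f"accuracy: {accuracy_score(y, pred):.2f}")            # ~0.98
# ...yet it catches none of the employees who were actually fired.
print(f"recall on rare class: {recall_score(y, pred):.2f}")  # 0.00
```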

How To…

A) Generating Data

Do you know for sure what constitutes a good employee? You do not. Neither do I. No one does. Job requirements are broad, and assessments of individual performance are biased. Moreover, we do not work alone but in a complex, interdependent ecosystem. Therefore, do not seek perfect measures; they do not exist. Choose reasonable ones instead and stick to them.

Do you retain data on the applicants you screen out? Most companies do not keep all of the data that comes to them in digital format. Keep in mind that aggregating information from multiple perspectives and over time is valuable. Do you want to launch a major digital HR project? That is great. But before you do, determine which of the necessary data are available and can be extracted and transferred into a usable format at a reasonable cost. Data sharing across functions must become a short-term priority: to evaluate employees’ performance, you must know the financial performance of their units and of the whole company. In the long run, invest in data standardization and platform integration across your company.
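
As a minimal illustration of what such cross-functional sharing looks like in practice, here is a sketch joining an HR extract to unit-level financials; all table and column names are hypothetical:

```python
import pandas as pd

# Hypothetical extracts from separate HR and finance systems.
employees = pd.DataFrame({
    "employee_id": [1, 2, 3],
    "unit": ["sales", "sales", "support"],
    "performance_rating": [3.8, 4.2, 3.5],
})
unit_financials = pd.DataFrame({
    "unit": ["sales", "support"],
    "unit_revenue_growth": [0.07, 0.02],
})

# Join employee ratings to unit-level financial performance so the two
# can be analyzed together rather than sitting in functional silos.
merged = employees.merge(unit_financials, on="unit", how="left")
print(merged)
```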

You do not have enough data to build an algorithm? Small data are often sufficient for identifying causal relationships, which managers need to understand in order to act on insights. The less data you have, the more prior knowledge you will need, whether from management theory, expert knowledge, or managerial experience. Do not neglect randomized experiments for testing causal assumptions. Google became known for running experiments on all kinds of HR phenomena, from the optimal number of interviewers per job candidate to the optimal size of the dinner plates in its cafeteria.
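
As a sketch of how such a randomized experiment might be analyzed, assuming a hypothetical intervention, sample size, and outcome measure (none of which come from the article):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical experiment: randomly assign 200 employees to a new
# interview format (treatment) or the current one (control).
n = 200
treatment = rng.random(n) < 0.5  # random assignment is what licenses a causal reading

# Simulated outcome, e.g. a candidate-quality score; treated group is +0.3 on average.
outcome = rng.normal(loc=5.0, scale=1.0, size=n) + 0.3 * treatment

# Compare group means; Welch's t-test does not assume equal variances.
t, p = stats.ttest_ind(outcome[treatment], outcome[~treatment], equal_var=False)
effect = outcome[treatment].mean() - outcome[~treatment].mean()
print(f"effect estimate: {effect:.2f}, p = {p:.3f}")
```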

If other companies make their data available, check that your context is not too distinct; otherwise, an algorithm built on data from elsewhere will not be effective for your own organization. You can also use social media as an alternative source of data: some employers use it for hiring, others to identify problems such as harassment. This, of course, raises an ethical question: is using employee-related data out of bounds, or is it appropriate as long as the data are anonymized?

B) Using Machine Learning In the Hiring Process

In predictive recruitment and hiring, some machine learning algorithms can do a better job than human recruiters. However, finding good data with which to build such an algorithm can be challenging. Some companies rely on the attributes of their “best performers,” but training an algorithm only on top performers is problematic because it examines only those who succeeded; it poses a problem of self-selection. The model’s ability to “keep learning” and adapt to new information also disappears when the flow of new hires is constrained by the predictions of the current algorithm. Furthermore, if we consider the difference between majority and minority populations, algorithms that maximize predictive success for the population as a whole may predict poorly for the minority population. Generating separate algorithms for each group might lead to better outcomes, but also to conflicts with legal and ethical norms against disparate treatment. Defining fairness in machine learning algorithms therefore remains difficult. In fields such as marketing, ignoring these issues is not a big deal, but ignoring them in human resources can become very costly and carry legal consequences.
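
To make the majority/minority point concrete, here is a sketch that fits one model to a synthetic applicant pool and reports accuracy per group; the data, group sizes, and the flipped feature-outcome relationship are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

# Synthetic applicant pool: 90% majority group, 10% minority group,
# with a deliberately different feature-outcome relationship per group.
n = 5000
group = (rng.random(n) < 0.10).astype(int)        # 1 = minority
x = rng.normal(size=(n, 3))
signal = np.where(group == 1, -x[:, 0], x[:, 0])  # relationship flips by group
y = (signal + rng.normal(scale=0.5, size=n) > 0).astype(int)

# One model fit to everyone maximizes overall predictive success...
model = LogisticRegression().fit(x, y)
pred = model.predict(x)
print(f"overall accuracy: {accuracy_score(y, pred):.2f}")
# ...but per-group accuracy reveals whom the model actually works for.
for g, name in [(0, "majority"), (1, "minority")]:
    print(f"{name} accuracy: {accuracy_score(y[group == g], pred[group == g]):.2f}")
```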

C) Decision-Making

There are numerous questions related to fairness that need to be raised. Is the algorithm biased? Past discrimination embedded in the data used to build a hiring algorithm is likely to perpetuate that discrimination. And who can assure us that evaluators are unbiased when assessing candidates? Algorithms could in fact reduce such bias by standardizing the application criteria and by removing irrelevant information such as the race and sex of candidates. Legal challenges are a different matter: letting people make the hires is likely to produce far more bias than an algorithm would, yet bias introduced by machines is easier to identify and could lead to class action lawsuits.
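
A minimal sketch of the “remove irrelevant information” step, with hypothetical column names; note the caveat in the comments, which is a general point about such blinding rather than a claim from the article:

```python
import pandas as pd

# Hypothetical applicant table; column names are illustrative.
applicants = pd.DataFrame({
    "years_experience": [4, 7],
    "test_score": [82, 91],
    "race": ["A", "B"],
    "sex": ["F", "M"],
})

# Blind the model to protected attributes before training.
PROTECTED = ["race", "sex"]
features = applicants.drop(columns=PROTECTED)

# Caveat: remaining columns can still act as proxies for protected
# attributes (e.g. postcode), so blinding alone does not guarantee fairness.
print(features)
```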

Between two candidates who are relatively similar and both qualified for the position, the hiring manager often chooses in an ad hoc manner. Suppose an algorithm determines that one candidate is an 80% match for the position and the other a 90% match. Is a 10-point difference large or small, given some very likely measurement errors and biases? To mitigate some of these issues, we could introduce random variation, which has been an unrecognized but important mechanism in management. Contrary to popular belief, research shows that employees perceive random processes as fair for determining complex and thus uncertain outcomes. Therefore, if both candidates are strong, it makes more sense to choose at random. In other words, randomization should be an AI-management tool.
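
As a sketch, such a randomized tie-break could look like the following; the 10-point tolerance and the function name are illustrative assumptions, not prescriptions from the article:

```python
import random

def choose_candidate(scores: dict[str, float], tolerance: float = 0.10) -> str:
    """Pick the top-scoring candidate, but break near-ties at random.

    `tolerance` is the score gap below which candidates are treated as
    statistically indistinguishable (a judgment call for each employer).
    """
    best = max(scores.values())
    finalists = [name for name, s in scores.items() if best - s <= tolerance]
    return random.choice(finalists)  # random choice among the near-ties

# The 80% vs. 90% example from the text: within tolerance, so a coin flip.
print(choose_candidate({"candidate_a": 0.80, "candidate_b": 0.90}))
```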

In Machines We Trust, Do We Not?

Managers find it acceptable to entrust hiring, promotion, and reward decisions to algorithms. When it comes to punishing employees, however, the use of an algorithm raises harder questions: what if one day an algorithm could predict who will steal from the company or commit a murder? Can we judge an individual on anything other than his or her own actions?

With machines, it is much more difficult to explain how an algorithm makes predictions, because the model is often a messy combination of numerous factors, far harder to grasp than the old-fashioned rule that “more senior workers get preference over less senior ones.” In high-stakes contexts, those that directly affect people’s lives or their careers, explainability is the primary concern and will become imperative for the successful use of machine learning technologies.
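
The article does not prescribe a method, but one common way to make a black-box model’s predictions more explainable is model-agnostic feature importance; here is a sketch with hypothetical promotion data and feature names:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(7)

# Hypothetical promotion data: three named features, outcome driven mostly by tenure.
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.2 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
feature_names = ["tenure", "last_review_score", "training_hours"]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Permutation importance: how much accuracy drops when one feature is shuffled,
# a model-agnostic way to show stakeholders what drives the predictions.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(feature_names, result.importances_mean):
    print(f"{name}: {imp:.3f}")
```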

A Process of Action-Reaction

Changes in formal decision-making inevitably affect employees’ behavior. How will employees react to decisions made by an algorithm instead of a supervisor? Even if employees are not always committed to the organization, they might be committed to their manager. Consider the following example: if a supervisor assigns me to work on the weekend, I might do it without complaining because I consider the supervisor fair. When my work schedule is generated by a program, my response might be different, since there is no relationship between me and the algorithm. Yet some decisions are easier to accept from an algorithm, especially when they have negative consequences for us (price increases, for example).

These are a few questions you should ask yourself before introducing AI technologies in HR management. In sum, remember:

1. Causal explanations are essential for analytics and decision-making in HR because they can ensure fairness, be understood by stakeholders, and be defended in a court of law.
2. Companies have to accept HR algorithms’ relatively low predictive power.
3. Randomization can help with establishing causality and partially compensating for algorithms’ low predictive power.
4. Formalizing processes of algorithm development and soliciting contributions from all stakeholders will help employees form a consensus about the use of algorithms and accept their outcomes.
