Gender equality: is artificial intelligence a blessing or a curse?

Over the past decade, and particularly in the post-#MeToo world, the causes and consequences of gender inequalities have come under increasing scrutiny from academics, policy-makers, consumers and the general public. During the same period, concerns about the diffusion of artificial intelligence (AI) have attracted growing attention in the public debate. AI is a “general purpose technology” (GPT) whose advances sharply lower the cost of prediction, especially through machine learning, that is, the use of data to make predictions (Agrawal, Gans & Goldfarb, 2019).

One area that will be strongly affected by AI is the labor market, a market in which gender inequalities have been extensively studied by social scientists. The gender wage gap (the average difference between men’s and women’s wages) has been decomposed to investigate the role of attributes (for example, differences between men and women in years of education, occupational choices, years of experience…) and the role of discrimination (different returns to the same attributes). Discrimination is often measured as the part of the gap that remains unexplained after controlling for all observable differences between men and women. A difficulty researchers face when measuring it is making sure that all relevant differences are taken into account, as some may be hard to measure or simply unavailable in the data.

Because AI lowers the cost of prediction, it is not surprising that the debate surrounding AI has also raised questions about the fairness of AI algorithms and AI decision-making. Will AI algorithms help reduce gender discrimination, for example by improving predictions of workers’ productivity based on objective factors? Or will they, on the contrary, exacerbate inequality in hiring and pay? Looking beyond the labor market, how gender-biased is AI? While answering these questions is an ongoing endeavor that will require growing research resources, three considerations are worth keeping in mind. First, defining the correct benchmark (or counterfactual). Second, distinguishing between an algorithm’s objectives and its predictions. Third, when formulating policy advice, taking into account the consequences of informational asymmetries between regulators and AI users.
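The decomposition sketched above corresponds to what labor economists know as the Oaxaca-Blinder decomposition. As a minimal illustration (the notation below is not from the original text): writing average wages as w̄, average observable attributes as X̄, and estimated returns to those attributes as β̂, for men (m) and women (f),

```latex
\[
\underbrace{\bar{w}_m - \bar{w}_f}_{\text{gender wage gap}}
\;=\;
\underbrace{(\bar{X}_m - \bar{X}_f)'\,\hat{\beta}_m}_{\text{explained by attributes}}
\;+\;
\underbrace{\bar{X}_f'\,(\hat{\beta}_m - \hat{\beta}_f)}_{\text{unexplained part, often read as discrimination}}
\]
```

The second term captures exactly the measurement difficulty mentioned above: it is “unexplained” only relative to the attributes the researcher observes, so any unmeasured difference inflates the apparent discrimination component.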

The role of the counterfactual

Examples of AI exhibiting gender bias have reached the popular press, shaping the public perception that AI leads to discriminatory decisions. Yet this evidence in itself is insufficient to discard AI algorithms. The key question for policy-makers may not be “are AI algorithms prone to gender bias?” but rather “is such bias larger or smaller than it would be without AI algorithms?”. Indeed, the alternative to using AI algorithms is to rely on human judgment and decision-making, and, as extensive research shows, human decisions are often prone to gender biases. In recent work with my colleague Professor François Longin (Longin and Santacreu-Vasut, 2019), we show that this is the case in an investment context, an environment where decision-makers aim to maximize their gains and do not explicitly pursue a gender-biased objective. Yet investment decisions are subject to unconscious biases and stereotypes that lead to biased trading choices, for example selling a company’s stock when a female CEO is appointed to lead it. While this may not be the objective of traders, investors may predict that selling is the best course of action partly as a result of their gender stereotypes.

The distinction between objectives and predictions

The distinction between objectives and predictions is central in economic theory, and it is extremely useful for thinking about the fairness of AI (Cowgill and Tucker, 2020). Are the goals of an AI algorithm biased? Or are its predictions biased? To answer these questions, it is important to distinguish between different types of algorithms, in particular those that are fully automated versus those where a human is “in the loop”. Like investors in financial markets, programmers or the “human in the loop” may have unconscious biases that translate into biased algorithms even when the goal of the algorithm is unrelated to gender. Programmers may be biased because, like many of us, they may suffer from in-group bias (Tajfel, 1970): the tendency to distinguish between “us” and “them”, which is deeply embedded in our socialization process. And because programmers are predominantly male, they may also be subject to homophily: the tendency to interact with individuals from one’s own group, including those of the same gender. How, then, should we deal with such biases? Are legal tools beneficial?
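To make the distinction between objectives and predictions concrete, here is a minimal, hypothetical sketch (synthetic data and illustrative variable names, not drawn from the cited papers): an algorithm whose objective is simply to reproduce past hiring decisions, with no gender-related goal, still produces gender-biased predictions when those past decisions were themselves biased.

```python
# Minimal, hypothetical sketch: a gender-neutral *objective* (predict "hired")
# can still yield gender-biased *predictions* when training labels encode past bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

gender = rng.integers(0, 2, n)            # 0 = male, 1 = female (synthetic)
productivity = rng.normal(0.0, 1.0, n)    # identically distributed for both groups

# Past human decisions penalized women: the labels are biased, productivity is not.
past_bias = 0.8
hired = (productivity - past_bias * gender + rng.normal(0.0, 0.5, n) > 0).astype(int)

# The algorithm's objective is only to predict "hired"; gender is never a goal.
X = np.column_stack([productivity, gender])
model = LogisticRegression().fit(X, hired)

# Same productivity, different gender: the prediction inherits the historical bias.
probe = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(probe)[:, 1])   # predicted hiring probability: man vs. woman
```

At identical productivity, the model assigns a lower predicted hiring probability to the woman, purely because the labels it learns from encode past human bias.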

Policies to counter biased objectives and biased predictions

Legal tools may be beneficial for fighting gender bias when regulators can identify that the objective of an AI algorithm is biased. Yet stringent legal tools can also give programmers and users an incentive to create less transparent algorithms, increasing the informational asymmetry between the regulator and the regulated regarding the algorithm’s objective. More radically, firms and organizations may decide to avoid using AI algorithms altogether in order to reduce scrutiny from their stakeholders as well as from regulators. Pushing for more transparent algorithms may therefore involve a trade-off between ex-ante and ex-post incentives.

Policies to fight biases in predictions, by contrast, may need to rely less on legal tools and more on education. For instance, we should educate future decision-makers to undo some of their own biases and to recognize that the data used by algorithms may themselves contain biases. For current generations, it is important to develop training programs that tackle the source of gender inequalities, namely human biases. In sum, whether AI will be a blessing or a curse for addressing gender inequalities will depend on fighting the root of gender prejudice: not machines, but humans.

References

Agrawal, A., Gans, J., & Goldfarb, A. (2019). Economic policy for artificial intelligence. Innovation Policy and the Economy, 19(1), 139-159.

Cowgill, B. & Tucker, C. E. (2020). Algorithmic fairness and economics. Columbia Business School Research Paper. Available at SSRN: https://ssrn.com/abstract=3361280 or http://dx.doi.org/10.2139/ssrn.3361280

Longin, F. & Santacreu-Vasut, E. (2019). Is gender in the pocket of investors? Identifying gender homophily towards CEOs in a lab experiment. Available at SSRN: https://ssrn.com/abstract=3370078 or http://dx.doi.org/10.2139/ssrn.3370078

Tajfel, H. (1970). Experiments in intergroup discrimination. Scientific American, 223, 96-102.

 
