The algorithmic frontier for LGBTQI+ rights

In 1993, the United Nations created the position of High Commissioner for Human Rights, whose work has increasingly been devoted to ensuring that universal rights are not restricted on grounds of “nationality, sex, national or ethnic origin, color, religion, language or any other status”. Today, this also covers sexual orientation, and hence LGBTQI+ individuals in particular. For this reason, a number of governments have appointed Special Advisors, Envoys or Ambassadors for the rights of LGBTQI+ persons (for example Argentina, Australia, Britain, Canada, Costa Rica, Italy, Thailand, and the US). France recently joined this group when Prime Minister Élisabeth Borne created the post of LGBT+ ambassador and appointed Jean-Marc Berthon to it. The presence of such ambassadors or envoys is telling: it shows that many issues of discrimination are not confined to national borders or to basic civil rights, but also encompass economic and social rights pursued through diplomacy. The same holds on our continent, where the European Commission’s “Union of Equality” strategy aims to build a European Union in which LGBTQI+ people can flourish like everyone else.

Progress towards this Union of Equality in Europe may also percolate beyond our continent, judging by the European Union’s success in setting international standards. For instance, the General Data Protection Regulation (GDPR) has shaped digital privacy legislation in many countries worldwide, bringing significant improvements in governance, monitoring, awareness, and strategic decision-making on the use of consumer data. The Internet, social networks – soon metaverses, their extensions into virtual or augmented reality – and their algorithms are the next frontier to be secured, as hate and discrimination cross national protections. The time has come to address these issues: the European Commission (EC) has implemented regulations on digital markets and services over the last few years and is now considering an Artificial Intelligence Act.

As the 2022 Social Media Safety Index published by GLAAD in the United States clearly shows, LGBTQI+ populations face specific issues online that require targeted responses. Indeed, the Internet plays an educational and socializing role for LGBTQI+ people that differs substantially from its role in other communities: LGBTQI+ individuals are mostly born to heterosexual parents and, very often, neither the parents nor other family members are prepared for, or capable of, accompanying their children in this discovery. The Internet has become the default liberating and socializing tool for personal development, providing access to information and to discussions with others.

It is in fact likely that the advent of the Internet is one of the main drivers of greater self-affirmation and LGBTQI+ visibility: according to a Gallup study carried out in the United States in 2020, 21% of members of Gen Z (born with the Internet, between 1997 and 2002) define themselves as non-heterosexual, while only 10% of those born before 1945 do the same (though the AIDS epidemic, which predominantly affected the Baby Boomer to Gen X LGBTQI+ cohorts, is also a factor).

This increase in visibility has a dark side: social media algorithms are potentially capable of identifying sexualities and classifying them. In theory, this is not done explicitly via defined categories, as neither European legislation nor national data protection regulators allow an individual’s sexual orientation to be recorded. However, because they can analyze people and their interests, all social networks identify profiles and use them for commercial purposes. In a sense, they know who is gay (or other) without even asking! So even if sexual orientation is not an official category in their algorithms, ads on Instagram or Twitter, or suggestions on Netflix, can implicitly target LGBTQI+ identities – sometimes bringing the benefit of recommendations aligned with users’ interests, but also exposing them to the downsides of what has been termed surveillance (or targeted/behavioral) advertising.
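To make the mechanism concrete, here is a minimal sketch, with entirely hypothetical users and interest labels, of how a “lookalike audience” system can group people by behavioral proxies without ever storing a sexual-orientation field:

```python
# Toy illustration (hypothetical data): an ad targeter that never records
# sexual orientation, yet reconstructs a community from behavior alone.

users = {
    # Each user is represented only by the pages they interacted with.
    "u1": {"drag_race_fanpage", "pride_events", "indie_music"},
    "u2": {"pride_events", "lgbt_news", "indie_music"},
    "u3": {"football_scores", "car_reviews"},
}

def similarity(a, b):
    """Jaccard similarity between two interest sets."""
    return len(a & b) / len(a | b)

def lookalike_audience(seed, pool, threshold=0.3):
    """Users whose behavior resembles the seed user's.

    Note that no explicit 'orientation' field is ever read or written:
    the grouping emerges purely from overlapping interests.
    """
    return [u for u, interests in pool.items()
            if u != seed and similarity(pool[seed], interests) >= threshold]

# An advertiser seeds the audience with one engaged user; the system
# implicitly extends it to behaviorally similar users.
print(lookalike_audience("u1", users))  # -> ['u2']
```

The point of the sketch is that the protected attribute never appears in the data model, yet the output effectively singles out the same group of people.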

This implicit classification needs to be closely monitored because of its potential dangers. The primary risk is that of bias and algorithmic discrimination. Statistical and machine learning tools are based on the analysis of past behaviors and can therefore reinforce prejudices. For example, if many users find two men kissing offensive, a poorly calibrated algorithm will follow the opinion of the majority and ban such images (as has happened on Instagram). Similarly, if the designers of an algorithm force a binary woman/man classification, non-binary people (about 14% of the adult population aged 18 to 44 in France, according to a YouGov-L'Obs study) will probably face discrimination.
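The Instagram example above can be sketched in a few lines. This is a deliberately simplified illustration with invented report counts, contrasting a rule that simply follows majority reports with one checked against an explicit anti-discrimination policy:

```python
# Toy illustration (hypothetical data): a moderation rule trained on raw
# user reports reproduces majority prejudice instead of applying policy.

reports = {
    "two_men_kissing": {"offensive": 620, "fine": 480},  # prejudice-driven flags
    "beach_sunset":    {"offensive": 3,   "fine": 900},
}

def naive_ban(post):
    """Ban whatever most reporters dislike: the majority becomes the policy."""
    votes = reports[post]
    return votes["offensive"] > votes["fine"]

def calibrated_ban(post, policy_allows):
    """Check majority reports against an explicit non-discrimination policy."""
    return naive_ban(post) and not policy_allows(post)

# Assumed policy: same-sex affection is explicitly protected content.
allows_same_sex_affection = lambda post: post == "two_men_kissing"

print(naive_ban("two_men_kissing"))                                  # -> True
print(calibrated_ban("two_men_kissing", allows_same_sex_affection))  # -> False
```

The naive rule bans the image because the majority flagged it; the calibrated rule does not, because an explicit policy overrides the vote. Real moderation systems are far more complex, but the failure mode is the same.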

In order for the Internet and social networks – metaverses tomorrow – to fully respect individual rights, if not to constitute safe spaces, it is necessary to identify how they implicitly classify sexual identities and to grant everyone true control over their data. This must be done with legal oversight and on the basis of information that is genuinely understandable (unlike the cookie consents we routinely grant on the web). We can then imagine social networks explaining to each user which of their past actions led to specific recommendations. This would allow us to delete only a fraction of our data and give us effective means of controlling our analyzed profile.
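What such “recommendation provenance” might look like can be sketched as follows. The data, function names, and matching rules are all hypothetical; the point is only that each recommendation carries the past actions that produced it, so a user can erase just those:

```python
# Sketch (hypothetical API): every recommendation is returned together with
# the past actions that triggered it, enabling selective deletion.

history = [
    ("liked", "pride_parade_video"),
    ("watched", "cooking_show"),
    ("followed", "lgbt_news"),
]

def recommend(history):
    """Return (recommendation, supporting_actions) pairs."""
    recs = []
    queer_signals = [a for a in history if "pride" in a[1] or "lgbt" in a[1]]
    if queer_signals:
        recs.append(("queer_film_festival_ad", queer_signals))
    if ("watched", "cooking_show") in history:
        recs.append(("recipe_channel", [("watched", "cooking_show")]))
    return recs

def forget(history, actions_to_delete):
    """Selective erasure: remove only the actions behind one recommendation."""
    return [a for a in history if a not in actions_to_delete]

recs = recommend(history)
# Delete only the signals behind the targeted ad; the rest of the profile stays.
history = forget(history, dict(recs)["queer_film_festival_ad"])
print(recommend(history))  # only the recipe recommendation remains
```

Real recommender systems do not expose this mapping today; the sketch shows what the transparency requirement argued for above would demand of them.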

We also need to find new ways to control the use of implicit categories, and this cannot be done by new algorithms alone. The solution may not lie in the brigades of moderators currently employed by Meta/Facebook (and others) to filter images, who are so traumatized by the content they are exposed to that they have won lawsuits for mistreatment against their employers. This oversight could instead take place directly within social networks, based on promoting positive voices and on positive algorithmic discrimination. The 2021 GLAAD report proposes increasing the visibility and impact of users with a benevolent outlook – those acting as beacons illuminating and guiding the decisions of others. For such "beacons" to emerge and influence others, we must all question our own values and learn to communicate them online.
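As a minimal sketch of this "positive algorithmic discrimination", consider a feed ranking that applies a boost to trusted, benevolent users rather than ranking on raw engagement alone. The data, the beacon flag, and the boost value are all assumptions for illustration:

```python
# Sketch (hypothetical scoring): boosting "beacon" users in a feed ranking
# instead of relying on raw engagement, which often rewards outrage.

posts = [
    {"author": "troll_42",    "engagement": 950, "beacon": False},
    {"author": "mentor_ally", "engagement": 400, "beacon": True},
]

BEACON_BOOST = 3.0  # assumed multiplier; a real system would tune this

def score(post):
    """Engagement-based score, amplified for vetted benevolent accounts."""
    return post["engagement"] * (BEACON_BOOST if post["beacon"] else 1.0)

ranked = sorted(posts, key=score, reverse=True)
print([p["author"] for p in ranked])  # -> ['mentor_ally', 'troll_42']
```

The design question hiding in this sketch – who decides which accounts count as beacons, and how large the boost should be – is exactly the governance problem the paragraph above raises.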

It has always been known that the fight against hate and discrimination cannot be carried out at the national level alone, for it transcends frontiers. It must also be waged in the ether – the digital world that unites us all – with weapons of its own: those of algorithms.
