TOWARDS A POLICY OF ALGORITHM SECURITY?

While Elizabeth Warren and Bernie Sanders, both senators and candidates in the US Democratic presidential primaries, advocated during their campaigns for dismantling the monopoly of the “Big Tech” companies (the American web giants) [1], the European Commissioner for Competition Margrethe Vestager has declared several times in recent years that, while she shares the same objectives of protecting users and their freedom, dismantling through antitrust law does not strike her as effective [2]. She finds it more useful to combine the promotion of competition with regulatory constraints such as the General Data Protection Regulation or the recent Digital Markets and Digital Services Acts. Even Mark Zuckerberg has at times called for more regulation of social networks, while his Facebook co-founder Chris Hughes goes further and advocates breaking the company up [3]. So what should be done?

To improve control by public authorities, some, like Hannah Fry, a mathematician at University College London who published a well-received book on data and algorithms in 2018 [4], suggest establishing a Regulatory Authority for Algorithms, modeled on the US Food and Drug Administration. The proposal deserves attention because it suggests that algorithms – today mainly those of social networks, tomorrow all those grouped under the term “Artificial Intelligence” – present a potential for harm that must be evaluated before commercialization.

According to this health analogy, algorithms bring benefits but can have harmful side effects (confinement in informational bubbles, addictive behaviors, collapse of democratic practices...). This is also the core of the criticism voiced by the Netflix documentary The Social Dilemma, which calls for a radical change of business models away from “human attention extraction”. The documentary features the Center for Humane Technology, whose president Tristan Harris wrote in the Financial Times (March 2020) [5] that the key issue is not so much the ownership and reselling of data as the functioning of the algorithms of social network platforms, whose aim is to maximize personal engagement at any cost. He calls for regulating such platforms as “attention utilities”, subject to licensing that ensures they operate in the public interest.

Human attention and social bubbles

Under this proposal, an independent agency would analyze algorithms ex ante via a “social impact assessment” and, if appropriate, authorize their launch. Hannah Fry and Tristan Harris thus seem to go beyond the surveillance by a public authority proposed by Bernie Sanders, Elizabeth Warren and Margrethe Vestager: they call for a priori administrative control.

The rationale for such control is clear: algorithmic recommendations are designed to make us react quickly, not to present us with all the relevant alternatives so we can make an enlightened decision. Social platforms such as Twitter and Facebook are financed by advertising, so their interest is to maximize the depth of their network, the audience of each post and the amplitude of reactions, in order to collect ever more information on the tastes, interests and preferences of their users. This allows them to suggest the most relevant ads or, like Netflix, which modifies the covers of films and series based on our reactions, to present us with the suggestions that will most influence our behaviors.

In the context of social media, this generates the so-called “Man bites dog” phenomenon, whereby the most referenced, and therefore most shared, information is not necessarily the most relevant but the most surprising [6] (reversing the more usual “Dog bites man”). Many online media thus make themselves known through a race for provocative or surprising “news” (who hasn't seen a headline promising “you won't believe what happened to...”) that is ultimately not very informative. In return, social platforms fail to gather information on our deep and considered interests. The message of The Social Dilemma is that engagement should not be measured through time spent online or the number of interactions; it should instead include an appraisal of the quality of those interactions.
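
To make the mechanism concrete, here is a minimal sketch of an engagement-driven feed ranker in the spirit of the logic described above. It is purely illustrative: the field names, weights and scores are invented assumptions, not any platform's actual code.

```python
# Toy engagement-maximizing ranker: surprise and past virality are weighted
# far more heavily than relevance, producing the "Man bites dog" effect.
# All names and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    relevance: float  # how informative the post is for this user (0..1)
    surprise: float   # how unexpected or provocative it is (0..1)
    shares: int       # how often it has already been shared

def engagement_score(post: Post) -> float:
    virality = min(post.shares / 1000, 1.0)
    return 0.1 * post.relevance + 0.6 * post.surprise + 0.3 * virality

feed = [
    Post(relevance=0.9, surprise=0.10, shares=50),   # careful, informative post
    Post(relevance=0.2, surprise=0.95, shares=800),  # provocative, viral post
]
for post in sorted(feed, key=engagement_score, reverse=True):
    print(post, round(engagement_score(post), 3))
# The provocative post ranks first despite being far less informative.
```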

For want of such a focus on quality, successful recommendation is currently measured through engagement intensity, i.e., whether we respond to stimuli. Algorithms therefore model our preferences and interests, relying on the history of our behaviors and our sharing and reading activities to understand us better; the task then boils down to predicting our future reactions. These predictions are flawed, however, in that they rely on the partial information that our history provides, not on the wider range of our potential interests. In doing so, they reduce the diversity of suggestions and our exposure to ideas that disturb us: acting as reinforcement mechanisms, they can lock us into an information bubble. This is the main criticism made by Tristan Harris and the Center for Humane Technology: the information bubbles induced by social network platforms may drive us apart, foster divisions, and tear the social fabric. The Social Dilemma goes on to foretell the end of democracy, using the example of the French Gilets Jaunes (the Yellow Vest movement), who shared information on Facebook and WhatsApp. While it is true that the fake news circulating in these social bubbles makes dialogue difficult, historians of social movements might counter that progressive groups have often developed their own narratives that differ from the perceptions of established media – for instance the US civil rights movement in the 1950s and 60s, or sexual minorities in the 1970s and 80s. Arguments based on the Gilets Jaunes are therefore questionable: the issue is the intensity and prevalence of these bubbles, rather than their mere existence.
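
The reinforcement mechanism can be illustrated with a toy simulation: if the recommender draws only from the user's own click history, and each click is fed back into that history, the feed concentrates on a few topics. This is a deliberately simplistic sketch (a Pólya-urn dynamic), not a model of any real recommender.

```python
# Toy illustration of the information-bubble feedback loop: recommending
# from the click history alone, then feeding clicks back into the history,
# progressively narrows the diversity of what the user sees.
import random

random.seed(0)
topics = ["politics", "science", "sports", "culture", "economy"]
history = list(topics)  # the user starts with broad interests

for _ in range(200):
    recommended = random.choice(history)  # predict from past behavior...
    history.append(recommended)           # ...and reinforce what was clicked

shares = {t: round(history.count(t) / len(history), 2) for t in topics}
print(shares)  # the distribution concentrates: a bubble forms around a few topics
```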

National Algorithm Security Agencies?

Creating a National – or European, or International – Algorithm Security Agency, as appealing as it may initially seem, overlooks an essential element: the algorithms of social platforms are not just scientific objects (mathematical or computational) that can be assessed a priori; they must be analyzed in a social science context, through their medium- and long-term impacts. Without going as far as the dramatic consequences of Skynet in the Terminator movies, any social algorithm – a recipe to obtain a result (a behavior) using certain ingredients (stimuli) – automatically escapes its designer's control. This is not just because social platforms are indifferent to the consequences of their tools, as participants in The Social Dilemma seem to imply.

Indeed, social network algorithms are just a new methodology for an old goal – the very principle of any public policy, in fact: influencing the behavior of individuals. History is dotted with our failures in this respect. Since the creation of statistical institutes, e.g., the US Census Bureau or INSEE in France, and the development of polling techniques, public authorities and private companies have used data and statistics to analyze the behaviors of citizens and consumers; they, in turn, attempt to modify these behaviors to obtain specific results (economic growth, poverty reduction, increased sales...).

However, when it comes to influencing a person, the difficulty is that his or her behavior evolves in response to the influences received. We humans are like machines that change function, shape or mode of operation as soon as something tries to nudge us.

Taking human reactions into account

Social and economic sciences have long studied the reciprocal influences between individuals and their environments and, in this context, the questions of modeling and control. Here, the important question is not whether an algorithm is harmful: as a recipe, it is designed for a specific purpose and generally works reasonably well in the short term. It is, however, a partial recipe that uses only a fraction (even if a large one) of the possible ingredients. When it comes to generating engagement on social networks, misinformation develops because the notions of truth or quality are absent from current algorithms and only popularity is taken into account. The algorithm thus achieves its short-term goal, but its medium-term impact (polarization of information, lack of contradictory information and of prioritization) is not within its purpose. This is the economic notion of externality, whereby companies internalize some benefits but externalize the negative consequences (such as pollution in an industrial context) to society at large [7].
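
In code, the externality argument is simply that the quality signal never enters the objective function. The sketch below is illustrative; the items, fields and numbers are invented.

```python
# The platform's objective contains popularity but not truthfulness, so the
# viral rumor wins the ranking: the cost of misinformation is external to
# the objective. Fields and values are hypothetical.
items = [
    {"title": "careful analysis", "popularity": 0.3, "truthfulness": 0.9},
    {"title": "viral rumor",      "popularity": 0.9, "truthfulness": 0.1},
]

def platform_objective(item: dict) -> float:
    return item["popularity"]  # truthfulness is never consulted

print(max(items, key=platform_objective)["title"])  # -> "viral rumor"
```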

In the social sciences, any stimulus is known to modify not only the individual it affects but also the context in which she operates. Suppose you analyze the behaviors of humans and model them via an algorithm: when you try to use this algorithm to influence people, they find themselves in a new context, since someone – now, you – is trying to modify their “usual” behavior. This, in turn, generates new reactions that can make the algorithm useless or even counterproductive [8]. For example, a famous Internet controversy concerned airlines' use of search-history “cookies” to identify one's planned vacations and increase the prices of flights. When this debate first appeared a few years ago, many Internet users played with their flight searches to disrupt the cookies and obtain, contrary to algorithmic forecasts, lower prices [9].
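
The airline-cookie episode is a textbook Lucas-critique situation, which a toy pricing rule makes explicit. The rule below, including its numbers, is invented for illustration.

```python
# A pricing rule estimated on past behavior: repeated searches for the same
# flight were historically a signal of firm intent to buy, so the algorithm
# raises the fare. The rule and all numbers are hypothetical.
def fare(base_price: float, search_count: int) -> float:
    return base_price + 15 * search_count

# Before users know the rule, an interested traveler searches repeatedly:
print(fare(200, search_count=6))  # 290.0 -- the prediction "works"

# Once the rule is public, users clear cookies or search erratically, so
# search_count no longer measures intent and the forecast breaks down:
print(fare(200, search_count=0))  # 200.0 -- same traveler, lower price
```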

The human dimension is insufficiently taken into account in algorithms that come from the engineering sciences, where individuals are seen as black boxes: their thoughts are not observed, but their consequences are measured through the resulting actions. In reality, humans think about the influences exerted on them, and they can counteract them.

Ensuring long-term sustainability

In such a context, it seems illusory for an administrative authority to control ex ante the tools of artificial intelligence, because their medium-term consequences are almost unpredictable given the number of actors that hold an influence. An adequate answer to the question posed by The Social Dilemma may therefore be found outside the context of drugs and medicines, and closer to that of controlling inflation.

Public authorities have long aimed to avoid the twin pitfalls of inflation that is too high (the hyperinflation that caused political instability in the 1920s) or too low (the deflation that led to impoverishment in the 1930s). Moderate inflation is optimal, but it is an unstable equilibrium: it results from the decisions of millions of individuals and companies, decisions that are themselves the result of people's perceptions of their environment (past, present and future) and of the decisions of others (competitors, suppliers, customers...).

After long believing that governments could directly control prices (e.g., the price of bread in France until the 1980s) or wield monetary policy tools themselves, the consensus in academic circles over the last thirty years has been that the agency in charge of inflation control, the central bank, should be independent and in full possession of the relevant tools – not the direct control of individual prices, but influence over individual decision making (via interest rates) and supervision of major market operators (banks and financial institutions). In this context, the role of the government is merely to set the objectives of monetary policy (low inflation and, in some countries such as the US, full employment). Central banks were made independent to convince the population that they pursue only their mandated medium-term objectives, away from short-term political considerations (which may come into play at election time). We saw an attempt at shifting this status quo when President Trump threatened to replace the Chair of the Federal Reserve to exert influence over policy tools (interest rates) [10].

A Central Bank of Algorithms

Rather than imposing a priori administrative approval by an Algorithm Security Agency, the control of Artificial Intelligence algorithms may be better entrusted to an independent authority that directly supervises AI companies and imposes certain basic algorithms. It could, for example, monitor “essential” algorithms, have the ability to modify them, and obtain daily impact measurements (just as the central bank checks every night that private banks balance their accounts). This independent agency, this Central Bank of Algorithms, could thus reintroduce a focus on the medium term and on the evolution of society, in accordance with objectives set by governments. It could also monitor the degree of concentration of current Internet platforms, to avoid the emergence of companies that are “too big to fail” and would endanger the whole system.
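
By analogy with the nightly balance checks of the banking system, such supervision could amount to verifying daily impact metrics against mandated limits. The sketch below is a speculative illustration of that idea; every metric name and threshold is a hypothetical placeholder, since no such agency or reporting standard exists.

```python
# Speculative sketch of a daily supervisory check by a "Central Bank of
# Algorithms". Metric names and limits are hypothetical placeholders.
DAILY_LIMITS = {
    "polarization_index": 0.40,    # dispersion of content across user groups
    "misinformation_share": 0.05,  # flagged content among top recommendations
}

def supervise(daily_metrics: dict) -> list:
    """Return the metrics for which a platform breaches its mandated limit."""
    return [name for name, limit in DAILY_LIMITS.items()
            if daily_metrics.get(name, 0.0) > limit]

report = {"polarization_index": 0.52, "misinformation_share": 0.03}
breaches = supervise(report)
if breaches:
    print("Supervisory intervention required:", breaches)
```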

Its ability to act directly, its independence and its focus on explicit objectives would help foster systemic trust among individuals and businesses. This confidence is the key factor that makes it possible to better anticipate the reactions of individuals to the stimuli they receive: it improves reactivity and facilitates the resolution of the key issues posed by information bubbles and misinformation. As with financial innovation, which is constrained by regulation to avoid major economic crises (crises that happen nevertheless when regulation is loosened), the development of artificial intelligence might be slightly slowed, but with an objective of public interest and a benefit of long-term sustainability.

References

1. https://medium.com/@teamwarren/heres-how-we-can-break-up-big-tech-9ad9e0da324c

2. https://eeas.europa.eu/delegations/united-states-america/43309/commissioner-margrethe-vestager-press-conference-washington-dc_en

https://www.competitionpolicyinternational.com/eu-vestager-says-breaking-up-facebook-would-be-a-last-resort/

3. https://www.ft.com/content/0af70c80-5333-11e9-91f9-b6515a54c5b1

https://www.nytimes.com/2019/05/09/opinion/sunday/chris-hughes-facebook-zuckerberg.html

4. Fry, H. (2018). Hello World: How to Be Human in the Age of the Machine. London: Penguin.

5. https://www.ft.com/content/abd80d98-595e-11ea-abe5-8e03987b7b20

6. For an application to macroeconomics, see Nimark, K. P. (2014), “Man-Bites-Dog Business Cycles”, American Economic Review, 104(8). https://www.aeaweb.org/articles?id=10.1257/aer.104.8.2320

7. Katz, M. L., & Shapiro, C. (1985). Network externalities, competition, and compatibility. American Economic Review, 75(3), 424-440.

Wattal, S., Racherla, P., & Mandviwalla, M. (2010). Network externalities and technology use: a quantitative analysis of intraorganizational blogs. Journal of Management Information Systems, 27(1), 145-174.

Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211-236.

8. This is an example of the famous Lucas Critique, introduced in economics by Robert E. Lucas (recipient of the 1995 Nobel Prize in Economics):

Lucas, R. E. (1976). Econometric policy evaluation: A critique. Carnegie-Rochester Conference Series on Public Policy, 1(1), 19-46.

9. https://time.com/4899508/flight-search-history-price/

http://www.businessinsider.fr/us/clear-cooking-when-searching-for-flights-online-2015-9

10. https://www.newyorker.com/news/our-columnists/the-high-stakes-battle-between-donald-trump-and-the-fed
