Intelligence: natural vs artificial?

Management requires intelligence: a capacity to understand, to connect things and to act. The study of finance, strategy, marketing or logistics at least gives the feeling of controlling reality; this is far less the case for human resources or management. These fields touch on the mystery of human behavior, and if it is difficult to maintain good relations among family and friends, it is certainly no easier in the professional world, where results must be produced.

On the subject of management, tomorrow’s experts are many. This art is harder to tame, and it is for this reason that, since the dawn of humanity, we have never ceased to question ourselves and to write about human behavior in war, religion, love and, more recently, working together. We are thus always on the lookout for more intelligence and, as natural intelligence seems unable to satisfy this thirst, we now turn with hope towards the artificial.

In a recent article, Davenport¹ surveys the artificial intelligence projects currently underway in a sample of companies. He identifies three categories of projects. The first, and the largest, consists in accelerating and automating the gathering and linking of data held in various IT systems. The diversity of information systems dealing with personnel matters obviously makes this prospect very attractive, as does the need to finally exploit the results of annual performance appraisals.

The second category relates to machine learning and deep learning: trawling enormous quantities of data for links, regularities and structures of proximity that allow us, for example, to receive targeted ads based on our previous browsing or purchases on the internet. Such possibilities are obviously useful for selection and recruitment, but also for assessing management by tracing managerial actions, decisions and behaviors.

The third category of AI projects concerns ‘conversation’: systems that allow a client or employee to interact with a machine that interprets requests and formulates appropriate replies. Without yet being in the year 2050, when we can imagine machines that feel², these systems, which Davenport notes are mostly used with employees, answer the questions that may concern them.

The article does not indulge in extreme speculation; rather, it seeks to identify what is already at work in AI projects. But the perspectives it opens cannot leave management indifferent, a management still limited by the frontiers of natural intelligence. Indeed, the handling of human issues in organizations is structurally confronted with three major problems, problems that are permanent and for which we are still vainly looking for a definitive solution.

The first problem is that of measurement. Because managerial decisions must be made, we need instruments to describe and compare the options before choosing the best solution. Finance and marketing possess such units of measurement. We can always question, in finance for example, the quality of these measurements: do they correctly represent reality? But the great advantage of finance people is not the intrinsic quality of their measurements; it is that every stakeholder agrees on the manner in which to measure and describe reality. This is not the case for things human: not only is reality difficult to describe, but nobody agrees on the units of measurement. There is a world of difference between the way a line director and an HR director assess an applicant! We can legitimately expect AI to bring other measurements to bear, on decisions, actions and results, for example.

The second permanent problem is that of theory. To tackle the mystery of things human, it is essential to multiply perspectives, to change angle, to question the limits of the perceptions to which we are far too often tempted to reduce reality. Through its capacity to aggregate enormous amounts of past data and, by deep learning, to detect structures, regularities and recurrences, AI opens up new perspectives: not the unveiling of the mystery, but other ways of describing the real. This is the case, for example, with the analysis of managerial behavior, or with the traces left on social networks by job applicants.

There is a third issue, a little less ‘managerially correct’, with which HR specialists are confronted: that of human risk. In every management discipline, from information systems to finance to logistics, managing is as much about controlling risk as about money. The same goes for human issues. Moreover, for a century or so, the preoccupation with reducing human risk has been a constant in the development of organizations. Production lines rooted in Taylorism amount to making output independent of operators’ skills; processes and information systems are means of supervising human initiative; artificial intelligence and robots are ways to ‘augment’ man, but also ways of not taking risks on the limits of his capacities.

We can therefore understand the current appetite for artificial intelligence: confronted with these structural problems posed by the management of humans in our organizations, natural intelligence alone clearly does not suffice. However, it cannot be forgotten that artificial intelligence has other properties, in particular that of nourishing or strengthening illusions to which man is very sensitive: those of control, of virtue and of the philosopher’s stone.

Man dreams of controlling everything, as we have known since the appearance of the oldest texts, and he gives regular proof of it. The dominant management culture is one of control, and the organizational systems and processes in which organizations have invested so much money over recent years also, though not only, feed this thirst for control. AI, with its capacity to learn and to control and exploit masses of data inaccessible to humans, answers this seduction of control.

There is one thing in particular that man dreams of controlling, the thing that seems to him most uncertain: not so much the mystery of his contemporaries as the future. AI meets this need because it multiplies our capacity to analyze the past in order to extract scenarios of the future, as if past behaviors predicted the future and as if the past frequency of a behavior strengthened the probability of its recurrence. This is indeed the case, except when, as Taleb tells us, the ‘black swans’ appear: events that come as a surprise, have a major effect and are often inappropriately rationalized after the fact with the benefit of hindsight³.

Fears and expectations about AI convey a limited image of the future: as if it were merely the past plus the new tool, its possibilities grafted onto a frozen present. When microcomputing developed, everybody bet on the disappearance of paper; on the contrary, it is the vendors of ink and printers who benefited. When the French began declaring their yearly income on the internet, the fiscal administration hoped to save time; but this virtual, interactive tax form transformed taxpayers’ attitudes, and they began amending their declarations and asking questions that the administration now spends vast amounts of time answering. We always have trouble imagining what an innovation will transform: not the use originally intended, but everything around it. For this reason, the figures portraying job losses caused by AI must be taken with much care.

The second illusion is that of virtue. Some say that the development of artificial intelligence will free human resources managers from tedious tasks so that they can devote themselves to the noble parts of their function: relations, listening to employee concerns, looking after co-workers. This recalls the time when people hoped that reducing working hours would free time for relationships, family and commitments to clubs and societies; in all evidence, it is rather television and screens that have benefited most. The Rousseauism persists in which the fundamentally good individual is prevented from being so by a villainous society or organization; once rid of drudgery, he will necessarily turn towards what has more human value. This remains to be seen: for managers and HR managers to take a more relational and human approach to their job, they must, above all else, want to do so.

The third illusion concerns performance. I speak of illusion because, curiously, performance is the unmentionable word of people management. Depending on the situation, we implicitly favor one cause of performance or another. If you consider that the quality of organizations, structures and processes is enough to generate performance, then AI will obviously be a great help in increasing it: it allows people, and the use they make of their freedom, to be replaced, augmented or adjusted. The problem is that business performance can have other causes, such as people’s involvement in what they do, the quality of a relationship or the attention they give. There are tasks where this personal investment is what makes performance happen, and artificial intelligence (at least until 2050, according to the specialists) will not be enough.

The question of the stakes and possible consequences of the development of AI is not a simple one and, as Davenport shows, there is at times a gap between reality and the hyperbole of all these new experts. To help discern things, we can raise three points of vigilance. The first consists in constantly questioning our attitude towards innovation: is it fear, or naive submission to novelty? Is innovation seen as a means to help my business, or as an imperative in itself? In any case, one should not forget that in the gold rush the real winners were those who sold the shovels.

The second point of vigilance is to remain attentive to the new skills required of students, but also of current employees; these skills are necessary not only for the specialists who ‘do’ AI, but also for professionals in every sector and trade who see their roles and practices change.

The third is that, in periods of history when everything seems to change, it is always useful to return to what anthropology reveals as relatively unchanging in human nature, with the openness to history, philosophy and even good sense that this requires. Until now, man has always succeeded in foiling the plans aimed at dominating him. What does that tell us about the future: human nature or black swan?

Article translated by Tom Gamble, Council on Business and Society. First published in RH Info on 14/3/18 under the title Management et Intelligence Naturelle.


¹ Davenport, T., Harvard Business Review.
² Alexandre, L., La guerre des intelligences.
³ Taleb, N. N., The Black Swan (black swan theory).