The ethics of artificial intelligence

When pondering the ethical questions posed by artificial intelligence, it’s important to keep two main points in mind:

  1. Developing new technologies and advancing artificial intelligence is not an ethical issue in and of itself. It is not the technology that poses the problem, but how we use it and how we dream of using it. This clarification is as old as the debate itself, and it is crucial to remember it, and to guard against our own naïveté, when discussing the ethics of artificial intelligence.

  2. Today, this question appears in countless forms and scenarios. This showcases the fascination inspired by “new technologies” and shows how far artificial intelligence has progressed. The deeper problem is that it is implicitly assumed, or explicitly stated, that artificial intelligence will not only surpass human intelligence in the future but has already done so. Among system designers, this leads to the assumption that humans are ultimately just a source of error. One major consequence of this assumption is that the engineers who design the systems meant to replace humans are treated, de facto and without question, as far superior in every sense, including ethically, to the humans who are supposed to operate those systems. This is particularly true when it comes to anticipating ethical decision-making problems in potential emergency situations, as with self-driving vehicles or in aviation.

Putting aside the grave ethical problems linked to the short-term thinking and cost-cutting that Boeing, for example, exhibited in the rushed development of the 737 MAX, there is also an ethical problem specific to the design, implementation, and (dys)function of MCAS, the system meant to protect the aircraft from stalling due to a loss of speed.

When an airplane loses speed and risks stalling, MCAS is supposed to pitch the nose down so that the aircraft regains airspeed and lift. An airplane risks stalling when it pitches up too steeply, which is what tragically happened on the Air France Rio-Paris flight in 2009. The MCAS software was designed to push the plane’s nose down automatically whenever sensor data indicated that the plane was pitching up too far and was at risk of stalling. This happens without any involvement from the pilots, whose vigilance and effectiveness were considered inferior to those of the automatic system.

But here’s the thing. If the system misreads the data, it can “interpret” the plane as pitching up dangerously even when it is in a normal, and therefore essential, climb phase after takeoff. This is what happened in the two catastrophic crashes of Lion Air in October 2018 and Ethiopian Airlines in March 2019, less than five months apart.

This happened without the pilots being able to do anything about it, because they had not been briefed on how the system worked in the first case and had been misinformed in the second. Beyond the classic ethically problematic aspects of the affair (information and training on overriding the system were optional features sold to the airlines at extra cost), there is the fundamental problem of presuming pilot incompetence against system competence. It is as if we presuppose it so obvious that on-board electronic systems are infinitely more “intelligent” than humans that we no longer even inform the humans, in this case the pilots, of what the systems that replace them do and how they do it. There is not only an ethical problem here, but also a political one. It is not the ethics of so-called “intelligent” systems that is at stake. It is the ethics of their creators, who are human (all too human, to borrow from Nietzsche), and who assume they know what is good for others instead of letting those others have a voice. It is not the machines that are at the root of this presumption of human incompetence, pushed to the point of not even informing people of what is being implemented and what directly impacts them as users. It is the men and women whose training in the ethical and political stakes of the systems they build is nonexistent.

We are here at the heart of ethics, whose continuation is, according to Aristotle, politics. To speak plainly, the “ethical” problem we attribute to what is called “artificial intelligence” plays out against the backdrop of an eternal problem of all political life: a power struggle in which some people consider themselves more knowledgeable than others, or even the only ones who know, and treat those others, the users, like incompetent children. This is the problem of all tyranny, all dictatorship, all oppression. We find the exact same problem when we realize that the product sold by companies like the web giants is the users themselves, whom the robots are supposed to know better than they know themselves. The assumption here is that human behavior can be predicted from past behavior: we are only ever supposed to love what we have always loved. What meaning, then, does the “future” still hold?

The difficulty is that the problem advances quietly, so to speak, under cover of the supposed objectivity and neutrality of the technologies and of the intelligence supposedly presiding over them. Clearly, the ethical problems posed by artificial intelligence have nothing to do with the systems as such. The “ethical” problems linked to artificial intelligence are really linked to the idea their creators have of the relationship between humans and non-humans. Here, as everywhere, the greatest difficulty is that the victims of this dynamic are often complicit in the oppression imposed on them and in the power exerted over them.

If, in order to ask the question of ethics in artificial intelligence correctly, it is urgent to read Étienne de la Boétie’s Discourse on Voluntary Servitude, it is even more urgent to keep in mind that machines are not responsible for what we make them do or for what we dream of making them do. It is we humans who are ultimately responsible for the machines we dream up. The ethical problems of what we call artificial intelligence are in fact the ethical problems linked to the excesses of human imagination. It is therefore Plato’s Republic that we must read to ask these questions in the best possible way.
