So many ways to be wrong about the future of AI, but it'll outsmart us in the end

In 2015, I began to examine how artificial intelligence could be used in management. AI was becoming popular again, so it was high time to understand if - and how - the tools had evolved. Could AI be used in ways other than in well-defined cases like image recognition? Had it begun to have the flexibility necessary for discovering complex relationships in complex data, like those between individuals, teams, performance, and emotions in a company?

After a few years of investigation, I decided to postpone this project and to prepare for a new winter of artificial intelligence: a time of slow progress during which we wonder whether the technology will ever truly take off.

Current AI algorithms are undeniably impressive. Their ability to answer predetermined questions is extraordinary: for example, they can now identify cats, individuals, and cars in increasingly natural situations.

But for each new use of AI, there are even more cases where obstacles hinder its practical usage: data that is unavailable, not properly formatted, or too “noisy”; an absence of analytical ability and trained personnel; ill-adapted technology; legal or ethical issues, etc. The list of good reasons for not using AI, even in a narrow domain, could be a research subject in and of itself.

Paradoxically, even in the face of these many current obstacles, AI will likely reach its "Holy Grail" and become indistinguishable from human intelligence.

Can AI be intelligent in the human sense?

The current practical obstacles are small compared to the immense difficulty of reproducing typical human cognitive processes. For example, neither intuition (cognition emerging from the historical accumulation of experiences) nor emotion (which provides autonomy through "motivated cognition") is currently properly simulated in AI systems.

If AI can address specific questions in specific domains with well-defined data, does it have the ability to find useful relationships in heterogeneous and complex datasets? It is this capacity to produce logic in a vague yet motivated way that still sets humans apart from current machines.

Experts differ in their visions of the future: some talk about AI as a sure thing that is on the verge of taking over the world and surpassing humans. Others swear that machines will never display intelligence in the same way that humans do. Who to believe? Does our world overestimate or underestimate the speed at which AI will progress toward general intelligence?

AI: a narrow scope in the short term, a broad scope after future updates…

Paradoxically, the answer is… “both.” This paradox comes from a classic innovation phenomenon. When a field evolves, change does not happen in a linear, gradual way. The response of the human, social, and technical community to an evolution follows S-curves: at the beginning, progress is slow, then there is an explosive uptake, then it slows down again. New evolutions accumulate over time, and combine in a way that can be hard to interpret. Take a look at a visual depiction of this in the figure below, where the three S-curves represent three successive waves of innovation.
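As a minimal sketch of this dynamic (the logistic formula, timings, and wave heights below are illustrative assumptions, not values taken from the figure), the accumulation of waves might be modelled like this:

```python
import numpy as np

def s_curve(t, start, speed=1.0, height=1.0):
    """Logistic S-curve: slow start, explosive uptake, then saturation."""
    return height / (1.0 + np.exp(-speed * (t - start)))

t = np.linspace(0, 30, 301)             # time, in arbitrary "years"
wave1 = s_curve(t, start=5)             # wave #1: already saturated
wave2 = s_curve(t, start=15)            # wave #2: in its explosive phase
wave3 = s_curve(t, start=25)            # wave #3: barely started
total_progress = wave1 + wave2 + wave3  # what is actually observed over time

# Extrapolating wave #1 or #3 alone underestimates the future;
# extrapolating the current slope of wave #2 overestimates it.
print(total_progress[::50])
```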

Different predictions are possible depending on what we look at. If we focus on the first wave of innovation (#1), which is no longer progressing, we might conclude that technology will not amount to much - an overly pessimistic prediction. Paradoxically, we will also be too pessimistic if we look at the budding wave #3, which has not yet progressed very far.

On the other hand, if we project the speed of the current wave (#2), we overestimate progress and project the future overly optimistically.

[Figure: three successive S-curves of innovation, with overly pessimistic and overly optimistic projections of the real trajectory.]

This logic is inaccurate: in reality, progress from the second wave will calm down, but the progress anticipated from wave #3 will indeed occur, and will likely be followed by further progress in future waves #4, #5, and #6. It is this succession of innovations, including those that are currently unimaginable but still to come, that forms the real trajectory. Paradoxically, the real trajectory lies somewhere in the middle and is hard to imagine, because it does not fit any of the current trends.

AI in progress

Imagining the future of AI is difficult and troubled by contradictory perceptions about progress. Let's take a step back and remember that AI belongs to the field of computer science, a field whose roots stretch back several centuries.

The idea of the calculator can be traced back to the Age of Enlightenment, but at that point it could only be realized with mechanical machines and was therefore very limited. In the mid-20th century, electromechanical developments and then the first generation of electronics changed the scale of the technology, and the newfound power of calculators gave them practical applications in the workplace (as with the rise of IBM). At the end of the 20th century, the miniaturization of semiconductors led to a new wave of progress and widespread usage (mass-market technology such as the PC). Finally, from the dawn of the 21st century, the widespread availability of machines and networks changed the game again, with the Internet wave, mobile devices, and "smart" objects.

At each step, technology drew a lot of pessimism (“but machines are very limited!”) and a lot of optimism (“machines will take over the world!”). So, for AI technology, where are we at? We are in the middle of the road, in the uncertain position of having only observed two waves of progress thus far.

The first wave, starting in the 1950s and running at full speed in the 1980s and 1990s, assumed that machines would prove their intelligence in a "symbolic" way. AI was modeled as the work of an engineer manipulating concepts through logic. For example, in expert systems controlling nuclear power plants, AI took the form of a system of rules such as: "if (alarm = triggered & valve = open) then (close valve)". This was very exciting and gave rise to a lot of studies and some successful implementations.
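As a toy sketch (the facts, rules, and action names are hypothetical, not drawn from a real plant-control system), such a hand-written rule base might look like this in modern code:

```python
# Toy expert system in the 1980s style: every rule is written explicitly
# by a human engineer; the machine only matches conditions and fires actions.
facts = {"alarm": "triggered", "valve": "open"}

rules = [
    # (condition over the facts, action to take)
    (lambda f: f["alarm"] == "triggered" and f["valve"] == "open", "close valve"),
    (lambda f: f["alarm"] == "off" and f["valve"] == "closed", "do nothing"),
]

for condition, action in rules:
    if condition(facts):
        print("Rule fired:", action)  # -> Rule fired: close valve
```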

Unfortunately, this led to an impasse, as it required humans to program it all explicitly, in a circularity that lacked growth potential. The 2000s brought about a first winter for AI specialists, a phase in which there was not much hope of drastic future progress.

At the beginning of the 2010s, the situation was turned upside down: after decades of obscure studies, researchers working on computer vision produced impressive results relying on statistical models. Paradoxically, this had not initially been considered AI since, at the time, AI was equated with symbolic manipulation. Their method was dubbed "deep learning" (DL), as it was based on many layers of virtual neurons. With deep learning, machines can execute relatively sophisticated tasks, like recognizing a cat in the millions of pixels of an image.

Above all, these techniques allow for relatively automatic learning, called “Machine Learning”, as long as we provide the machine with a massive dataset and the human designates what to search for. For example, given a large number of photos tagged as having a cat or not, we can automatically train an algorithm to “spot cats” in photos in general.
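In code, this division of labour looks roughly like the sketch below. Real image recognition uses deep neural networks on pixels; here synthetic numeric features and scikit-learn stand in for the tagged photos and the learning machinery, purely as an illustration:

```python
# Minimal supervised-learning sketch: humans supply labelled examples
# ("cat" / "not cat") and decide what to predict; the machine fits the mapping.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for image features: 1,000 "photos", 20 numbers each.
X = rng.normal(size=(1000, 20))
# Hypothetical human-provided labels (1 = cat, 0 = no cat), with some noise.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # the "learning" step
print("accuracy on unseen photos:", model.score(X_test, y_test))
```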

We are in the middle of this second-generation AI boom, and the results are impressive. If we focus only on the recent successes of Machine Learning, we might conclude that the sky's the limit, that AI is going to take over the world… now! This is an illusion, and corresponds to the "too optimistic" projection in the figure, overestimating the future.

In fact, Machine Learning's strength lies in its simplicity, hence the criticism that this type of AI is nothing more than sophisticated statistics. Though slightly insulting, the label suggests that statistics is merely a method by which a hypothesis made by a human can be validated against data. ML is indeed mainly statistics, because humans are still expected to play the central role of imagining the relationships between the entities of the world, while the machine evaluates them automatically. Note that the machine has not "imagined" anything yet in this approach...
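A small, invented illustration of that division of labour: the human chooses which relationship to examine (say, education and salary), and the machine merely computes the statistic.

```python
# The human imagines the relationship ("does education relate to salary?");
# the machine only quantifies it. Variables and numbers are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
education_years = rng.integers(10, 22, size=500)
salary = 20_000 + 2_500 * education_years + rng.normal(0, 10_000, size=500)

r, p_value = stats.pearsonr(education_years, salary)
print(f"relationship chosen by the human, measured by the machine: r={r:.2f}, p={p_value:.1e}")
```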

Current ML is limited to predefined tasks: we cannot assign it more complex tasks like automatically cleaning and choosing data, detecting relationships between variables, or identifying variables that might be of interest. Above all, this AI does not suggest explanatory mechanisms: for example, determining that "the photo contains a cat because it is an animal with pointed ears and whiskers". Not only does this technology not suggest anything, but it is also unable to tell us clearly what cues were used, because current algorithms are not yet designed for such explanation.

Regarding all these questions, the slow progress in current technology suggests that we will never succeed, that human-like intelligence is too complicated, that our expectations are too high. This, too, is an illusion, illustrated by the "pessimistic projection" in our figure, which underestimates the future.

The real trajectory, like any uncertain prediction, is difficult to perceive, landing somewhere between those two trends. AI will have to go through many waves of progress to eventually reach a human-like form of intelligence. This progress will most likely take place, but in the meantime, we will probably go through other "AI winters": phases during which things move slowly and no one has faith that the future is bright.

The path toward a general artificial intelligence

At the moment, ML technologies operate in a single stage: the human specifies the inputs and uses the results. Sometimes an engineer may manually decide to take the result of a first algorithm and feed it as the input to a second one. But in the future, there is nothing to prevent this looping from being built into the ML process itself, so that relationships are chained together automatically, much as in the human brain. Here are the updates to expect for this to materialize; but first, a sketch of what today's hand-built chaining looks like.
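The data and both stages below are invented; the point is only that a human, not the machine, decides to feed the first algorithm's output into the second one:

```python
# Manual chaining, as an engineer does it today: the output of a first
# algorithm (cluster labels) is explicitly fed as an input to a second one.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(600, 5))
y = (X[:, 0] + rng.normal(0, 0.5, size=600) > 0).astype(int)

# Stage 1: an unsupervised algorithm produces an intermediate result.
cluster_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Stage 2: the engineer decides to append that result as an extra input feature.
X_augmented = np.column_stack([X, cluster_labels])
clf = LogisticRegression(max_iter=1000).fit(X_augmented, y)
print("accuracy of the hand-built chain:", clf.score(X_augmented, y))
```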

The first step forward is to incorporate the symbolic into current machine learning techniques. At the moment, ML is efficient at processing large amounts of rather continuous data (sounds, images) in order to guess a pattern (distinguishing images with a cat from those without). Unfortunately, for the time being, ML does not work well on symbolic data, meaning discontinuous data available in smaller quantities.

The second important advance is the ability to identify causal relationships. Current algorithms are unable to identify, organize, and test a logical system based on the data, and are unable to build causal inferences alone. Inference here means guessing which factor (e.g., gender, education) influences which other factors (e.g., salary, promotion). Current AI can help confirm such relationships, but it is not autonomous in imagining and proving causality.
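A small sketch of the gap, with invented variables: a hidden confounder creates a strong association that today's algorithms will happily report, yet nothing in the computation says whether (or in which direction) causality runs.

```python
# Correlation is detected automatically; causality is still up to the human.
# Here "seniority" (a hidden confounder) drives both training hours and salary;
# training hours do not cause salary, yet the measured association is strong.
import numpy as np

rng = np.random.default_rng(3)
seniority = rng.uniform(0, 20, size=2000)
training_hours = 5 * seniority + rng.normal(0, 5, size=2000)
salary = 30_000 + 1_500 * seniority + rng.normal(0, 5_000, size=2000)

corr = np.corrcoef(training_hours, salary)[0, 1]
print("correlation(training_hours, salary):", round(float(corr), 2))
# The machine confirms a relationship; deciding whether it is causal requires
# a model of the world that current ML does not build on its own.
```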

The first two important advances to be expected from ML are therefore the ability to automatically detect the categorizations present in the data (symbols) and the ability to start linking those symbols together, in a causal way if possible. Once ML can do this, it will become possible to manipulate those symbols with the older techniques developed for expert systems. These allow recursion, the ability to make inferences about inferences, i.e. to reason about reasoning. The third fundamental advance in AI will therefore be the combination of expert systems with ML, marrying the symbolism and recursion of the older techniques with the massive computational scale permitted by the new ones.
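One way to picture that combination, as a purely hypothetical sketch: a learned model extracts symbols (attributes) from raw data, and a rule-based layer then reasons over those symbols. The attribute detector is faked here; in practice it would be a trained model.

```python
# Hypothetical neuro-symbolic sketch: an ML stage outputs symbols,
# a rule-based stage reasons over them.
def ml_detect_attributes(image):
    """Stand-in for a learned model mapping pixels to symbols."""
    return {"pointed_ears": True, "whiskers": True, "barks": False}

rules = [
    # Symbolic rules, still written by a human for now.
    (lambda s: s["pointed_ears"] and s["whiskers"] and not s["barks"], "cat"),
    (lambda s: s["barks"], "dog"),
]

def classify(image):
    symbols = ml_detect_attributes(image)   # ML layer: data -> symbols
    for condition, label in rules:          # symbolic layer: symbols -> conclusion
        if condition(symbols):
            return label, symbols
    return "unknown", symbols

label, evidence = classify(image=None)
print(label, "because", evidence)  # an explanation the pure ML approach cannot give
```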

Winter is coming while we wait for machines to become motivated

While we wait for these advances to occur, only humans know how to build knowledge, i.e., to pick what to analyze, to form hypotheses, to check them, and so on. At best, humans can use current AI technology as a sophisticated statistical helper.

The technical progress required for AI to contribute relatively autonomously to the knowledge-building process is enormous. First of all, it requires phenomenal amounts of computational power compared to current capabilities. For comparison, human brains are several orders of magnitude more efficient than silicon, both in computational power and in energy consumption. But Moore's law has never failed so far, and it is therefore a safe bet that computational capacity will continue to rise at an astonishing pace.
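A back-of-the-envelope sketch of what that implies (the factor-of-1,000 gap and the two-year doubling period are illustrative assumptions, not figures from this article):

```python
# How long to close an efficiency gap of ~1,000x (three orders of magnitude)
# if capacity doubles every two years? Purely illustrative numbers.
import math

gap = 1_000                   # assumed efficiency gap between brains and silicon
doubling_period_years = 2     # assumed Moore's-law-style doubling period

doublings_needed = math.log2(gap)
print(f"{doublings_needed:.1f} doublings, i.e. roughly "
      f"{doublings_needed * doubling_period_years:.0f} years")  # ~20 years
```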

Even more critical is the fact that these recursive calculations must run continuously and permanently, so they cannot all be carried out ad infinitum on every item. It will therefore be necessary to invent a computer science based on learning tradeoffs: not only will the machine decide on its own to initiate a search for inferences, but it must also know how to stop and be satisfied with a good-enough model. It must also know when to resume learning.
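A toy sketch of such a tradeoff (the data, thresholds, and drift are all invented): the machine stops learning once its model is good enough, acts on it, and resumes learning only when its error on new observations drifts too far.

```python
# Toy "learning tradeoff": update a running estimate, stop when improvement is
# negligible, and resume learning only when the error on new data drifts.
import numpy as np

rng = np.random.default_rng(4)

def observe(t):
    """The environment: a noisy signal whose true mean shifts halfway through."""
    true_mean = 1.0 if t < 500 else 3.0
    return true_mean + rng.normal(0, 0.5)

estimate, learning = 0.0, True
good_enough, drift_limit, rate = 0.01, 1.0, 0.05

for t in range(1000):
    x = observe(t)
    error = x - estimate
    if learning:
        estimate += rate * error              # keep learning
        if abs(rate * error) < good_enough:   # good enough -> stop and act
            learning = False
    elif abs(error) > drift_limit:            # the world changed -> resume learning
        learning = True

print("final estimate:", round(estimate, 2))  # roughly tracks the shifted mean (~3)
```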

This process could look a bit like what we call "motivation" in human cognition. Essentially, humans are constantly learning about their environment, motivation being a crucial mechanism in the choice between learning and acting. Machines, on the other hand, are simplistic in that the expert operating them currently decides when and how to run the calculations. As with humans, intelligence will only appear when machines exhibit a form of free will. So far, the modeling of these forms of emergence, of motivated cognition, has not really begun.

All in all, there is no reason not to imagine that AI will become much more flexible and user-friendly than today's limited algorithms. Nevertheless, it is likely that it will take a long time, at least one long winter, or even several successive winters, before reaching that mythical day when the machine is as intelligent as a human. Astonishingly, this artificial intelligence, like human intelligence, will only emerge out of strong motivational mechanisms.
