The dangers of artificial intelligence

When we talk about the dangers of artificial intelligence, we imagine armies of robots rampaging through the streets, or a world where a central artificial intelligence has taken control of humanity. But today, artificial intelligence poses far more mundane yet concrete dangers that we tend to underestimate.

The Role Predictive Models Play

In 2016, in Wisconsin, a man named Eric Loomis was sentenced to six years in prison for fleeing the police in a car that had been involved in another crime. What is surprising is that the relatively heavy sentence was partially justified by artificial intelligence software that predicted the accused had a high risk of reoffending. Yet this software, developed by a private company, gave no explanation whatsoever for its prediction. Predictive models in artificial intelligence, based on deep neural networks, are extremely powerful. They are also extremely complicated, and for this reason they work as black boxes: their inner workings are often impenetrable. And they are not flawless.
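
To see why such models resist explanation, here is a minimal sketch of how a neural risk score is computed. The features and the weights below are made up and untrained, purely for illustration; real systems are vastly larger, which only deepens the opacity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical case features (normalized): age, prior arrests, etc.
features = np.array([0.7, 0.1, 0.9, 0.3])

# A tiny neural network with random, untrained weights, standing in
# for a risk-scoring model.
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

hidden = np.maximum(0, W1 @ features + b1)     # ReLU layer
score = 1 / (1 + np.exp(-(W2 @ hidden + b2)))  # sigmoid -> risk in 0..1

print(f"predicted risk: {score[0]:.2f}")
# The output is the end of a long chain of weighted sums. Nothing in
# W1 or W2 reads as a human justification for the number it produces.
```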

Explainable AI

For example, a Japanese department store connected its CCTV cameras to artificial intelligence software to automatically count its customers and recognize whether they were women, men, children, young people, or elderly. But the store soon realized that every time the software saw a man with long hair, it classified him as a woman.

In another case, a team of researchers trained AI software to recognize horses in photos. After a while, the software had become accurate. But when new pictures of horses were presented to it, to everyone's surprise, the results turned out to be very disappointing. Only then did the team realize that all the pictures of horses used in the learning phase came from the same website, and that each of these photos carried a copyright notice in the lower left corner. The software had not learned to recognize horses, but the presence of a copyright notice. These two examples might make you smile, but don't forget that the same kind of technology sent Eric Loomis to prison for six years.

Artificial intelligence software can be great at making predictions, and often makes far fewer errors than human experts. But it is not infallible. In the field of artificial intelligence, many researchers are now dedicating their time and efforts to making algorithms more transparent. This is called explainable AI.
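
To make "explainable AI" concrete, here is a minimal sketch of one simple technique, occlusion sensitivity: hide one region of the image at a time and watch how the model's confidence changes. The `predict` function is a hypothetical stand-in for any model returning a confidence score; applied to the horse classifier above, a map like this would light up over the copyright notice rather than over the animal.

```python
import numpy as np

def occlusion_map(predict, image, patch=16, stride=8, fill=0.5):
    """Slide a gray patch over the image and record how much the
    model's confidence drops when each region is hidden. Big drops
    mark the regions the prediction actually depends on."""
    h, w = image.shape[:2]
    baseline = predict(image)            # confidence on the intact image
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            occluded = image.copy()
            y, x = i * stride, j * stride
            occluded[y:y + patch, x:x + patch] = fill  # hide one patch
            heat[i, j] = baseline - predict(occluded)  # confidence drop
    return heat
```

Heatmaps of this kind are only one of several tools in the explainability toolbox; attribution methods such as LIME or saliency maps pursue the same goal of showing what a prediction actually rests on.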

This is all the more important because artificial intelligence software learns from the data we provide. Now, you might think: great, the more data it has available, the more accurate its predictions. And of course, this is true. But if these data are biased or prejudiced, the software will learn to reproduce those biases and prejudices in its predictions.
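
A toy simulation, with entirely invented numbers, shows how this works: give two groups identical behavior but record one group's offenses more often, and any model trained on those records will flag that group as higher risk.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two groups with IDENTICAL true reoffense rates: behavior is the same.
group = rng.integers(0, 2, size=n)        # group 0 or group 1
reoffends = rng.random(n) < 0.20

# Biased recording: group 1 is policed twice as heavily, so its
# reoffenses are twice as likely to be caught and entered in the data.
catch_rate = np.where(group == 1, 0.60, 0.30)
recorded = reoffends & (rng.random(n) < catch_rate)

# A model trained on `recorded` can only learn what the data shows:
for g in (0, 1):
    print(f"group {g}: recorded reoffense rate = "
          f"{recorded[group == g].mean():.1%}")
# Prints roughly 6% vs 12%: the data differ, the behavior does not.
```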

For example, in the United States, it has been shown that people of color are more likely to be arrested by the police, more likely to be prosecuted, and more likely to be convicted. Consequently, all else being equal, people of color are more likely to end up in the database as repeat offenders. And that very database was used to predict that Eric Loomis, a person of color, represented a significant danger to society.

Opportunities and Limitations

Artificial intelligence offers incredible opportunities in countless domains. But we must remain realistic: it is not flawless. And in areas such as justice, health, social security, or military applications, where the human cost of a mistake is enormous, we must demand more transparency, starting today.