The rise of artificial intelligence has naturally seen people applying it (or attempting to apply it) in countless ways, with varying degrees of success. While AI can be a powerful tool in the right hands and the right situation, using it well is not as simple as installing a system and tapping a few keys. This is especially true in fields that deal with human behavior, like marketing. As the saying goes, "with great power comes great responsibility": marketing managers must be aware of AI's potential pitfalls to avoid problems. Equally important, they need to know how to deploy their AI tools properly, or risk squandering both the technology's potential and their company's efforts and resources. By understanding AI's pitfalls, marketing managers can make the most of its opportunities.
So far, AI's biggest advancements in the business world have come from deep learning: complex, multilayered (hence "deep") neural networks that solve difficult problems with predictive analytics. The more layers a neural network has, the more complex it is, and more layered networks can identify and learn more complex relationships between variables. This means artificial intelligence can uncover relationships that existing statistical techniques cannot detect, and that it can learn to do so autonomously. This is the main selling point of contemporary AI algorithms.
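To make the idea of layered learning concrete, here is a minimal sketch (plain Python with NumPy, everything invented for illustration) of a tiny two-layer network learning XOR, a relationship between variables that no purely linear model can capture:

```python
import numpy as np

# Toy illustration: XOR is a relationship a linear model cannot capture,
# but a small multilayer ("deep") network learns it easily. A real
# marketing model would use far more data and a framework such as
# PyTorch or TensorFlow; this is only a sketch of the principle.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR target

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass through both layers.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the cross-entropy loss.
    dp = p - y
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = (dp @ W2.T) * (1 - h**2)
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W2 -= lr * dW2 / 4; b2 -= lr * db2 / 4
    W1 -= lr * dW1 / 4; b1 -= lr * db1 / 4

print(np.round(p.ravel(), 2))  # approaches [0, 1, 1, 0]
```

A model with no hidden layer does no better than chance on this task; one hidden layer is enough to capture the interaction. Stacking many such layers is what lets deep networks capture far more complex relationships.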
While the ability of AI algorithms to autonomously create models is their strength, putting them into action is not without its challenges. These challenges are: a lack of common sense, defining objective functions, providing a safe and realistic learning environment, avoiding biased algorithms, keeping artificial intelligence understandable and controllable, the paradox of automation, and knowledge transfer.
Lack of common sense
What do we mean by a lack of common sense? It is not an insult to the programmers or operators; we mean that the algorithm itself lacks what we humans call "common sense". We know that emotional intelligence is important, and indeed AI systems are increasingly able to recognize people's emotions through image recognition, voice analysis, or text analysis. But recognizing emotions is a far cry from understanding and feeling them. An AI system could learn that the words "queen" and "crown" are linked, and could even use them appropriately in a sentence, but the meaning of the words and sentences would be lost on it. Anything approaching common sense must be programmed into the system by a person, which becomes a problem when it comes to objective functions.
Objective functions
An objective function specifies the result that the AI algorithm aims to optimize (Sutton and Barto, 2018). In a marketing context, this could be profit maximization or customer retention. AI's lack of common sense makes defining an objective function harder than it sounds: humans may understand a goal implicitly yet struggle to translate it for the algorithm. This can go awry: an autonomous car directed to "get to the airport ASAP!" might get there in record time, but only after mowing down pedestrians and speeding through red lights on the way. While that example is obviously extreme, we have already seen consequences of this play out in real life, with gender- or racially-biased systems. An outcome like profit maximization cannot be pursued without accounting for its legal, moral, and ethical implications, which marketing stakeholders need to keep in mind when building and implementing their systems.
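One way to encode that missing common sense is to build constraints directly into the objective function itself. The sketch below is purely illustrative; the penalty terms and weights are our assumptions, not a real system. A pricing policy is scored on profit minus penalties for undesirable behavior:

```python
# Hypothetical sketch of a constrained objective for a pricing policy:
# raw profit alone is a dangerous objective, so penalty terms encode the
# "common sense" the algorithm lacks. All names and weights below are
# illustrative assumptions.
def objective(profit, price_gap_across_groups, complaint_rate,
              fairness_weight=100.0, complaint_weight=50.0):
    """Score to maximize: profit minus penalties for undesirable behavior."""
    penalty = (fairness_weight * price_gap_across_groups
               + complaint_weight * complaint_rate)
    return profit - penalty

# Two candidate policies: B earns more raw profit but prices groups unequally.
policy_a = objective(profit=10_000, price_gap_across_groups=0.0, complaint_rate=0.01)
policy_b = objective(profit=11_000, price_gap_across_groups=15.0, complaint_rate=0.05)
print(policy_a, policy_b)  # A scores higher once the constraints are priced in
```

Under a naive profit-only objective, policy B would win; once the constraints are part of the score, policy A does. The hard part, of course, is choosing what to penalize and by how much, which is exactly the translation problem described above.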
Safe and realistic learning environment
As you can imagine, all this is easier said than done. Knowledge transfer from the expert to the algorithm, and vice versa, is one of the biggest problems facing AI today, and the potential for costly mistakes is enormous. To avoid the fallout, it is important for AI algorithms to learn in a safe, realistic environment. Safe, in that if they do make mistakes, the impact on the business is limited and they avoid the marketing equivalent of running a red light. Realistic, in that the data resembles what they would receive in a real-life situation. This presents a challenge in marketing, because customers can be unpredictable, and a new factor (like, say, COVID-19) can throw a wrench into the best-laid marketing campaigns. While it might be tempting to think that AI reduces or even eliminates our need to understand customer behavior, the opposite is true: we need detailed customer behavior theory more than ever, as it helps us better configure our AI algorithms.
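In practice, a "safe sandbox" often means evaluating a policy against simulated customers before it ever touches real ones. Below is a hypothetical sketch; the behavioral model (purchase probability rising with the discount, with diminishing returns) is an assumption standing in for real customer-behavior theory and data:

```python
import random

# Minimal sketch of a safe, realistic sandbox: test a discounting policy
# on simulated customers before deployment. The behavioral model below
# is an illustrative assumption, not an estimated model.
random.seed(42)

def simulated_customer(discount):
    """Purchase probability grows with discount, with diminishing returns."""
    base, lift = 0.05, 0.6
    p_buy = base + lift * (discount / (discount + 0.2))
    return random.random() < p_buy

def evaluate_policy(discount, price=100.0, n_customers=10_000):
    revenue = sum(price * (1 - discount)
                  for _ in range(n_customers) if simulated_customer(discount))
    return revenue / n_customers

for d in (0.0, 0.1, 0.3, 0.5):
    print(f"discount {d:.0%}: expected revenue per customer {evaluate_policy(d):.2f}")
```

A mistake here costs nothing, whereas the same experiment run on live customers could burn real revenue and goodwill. The catch is the "realistic" half: the sandbox is only as good as the customer-behavior theory behind it, which is why that theory matters more than ever.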
Biased algorithms
This brings us to another limitation of AI's use in marketing: its potential to be biased. Of course, the algorithm itself is not prejudiced, but if it is powerful enough, it can infer a characteristic like race or gender on its own and make biased predictions. How so? It might pick up on other information that acts as a proxy for the factor in question, like education or income, thereby unintentionally replicating the biases found in the data. In a marketing context, this could lead to outcomes like a price-optimization algorithm that ends up charging women more, or an advertising algorithm that targets a vulnerable population. This has legal implications as well as the obvious ethical ones. Complicating the problem, adding the sociodemographic variable in question to the model in an attempt to control for it can simply make it easier for the algorithm to make prejudiced predictions. If marketing stakeholders do not properly understand the algorithms they are using, they might not know to challenge these troubling predictions.
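A small simulation shows how proxy bias arises even when the sensitive attribute is excluded from the model. Everything below is synthetic and illustrative: the model never sees "gender", yet a correlated proxy feature lets it reproduce the gap baked into the historical labels:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical illustration of proxy bias: the model never sees "gender",
# yet a correlated feature lets it reproduce a gender gap that was baked
# into the historical "charged a premium" labels. All data is synthetic.
rng = np.random.default_rng(1)
n = 20_000
gender = rng.integers(0, 2, n)                 # 0 / 1, hidden from the model
proxy = gender + rng.normal(0, 0.5, n)         # correlates with gender
income = rng.normal(50, 10, n)                 # legitimate feature
# Historical labels are biased against group 1.
premium = (0.03 * income + 1.5 * gender + rng.normal(0, 1, n)) > 3.5

X = np.column_stack([proxy, income])           # gender itself is excluded
model = LogisticRegression().fit(X, premium)
pred = model.predict_proba(X)[:, 1]

print("avg predicted premium, group 0:", round(pred[gender == 0].mean(), 3))
print("avg predicted premium, group 1:", round(pred[gender == 1].mean(), 3))
# The gap persists: auditing predictions per group catches what simply
# dropping the sensitive column does not.
```

The practical lesson is that dropping the sensitive column is not enough; the system's predictions need to be audited group by group.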
Understandable artificial intelligence
The ability to understand and explain the model is another factor in the uptake of AI. If you are going to use an AI model, you need to understand why it makes the predictions it does and be able to interpret what the model is doing. More specifically, an AI's human "handlers" need to be able to explain: 1) the purpose of the model, 2) the data it is using, and 3) how the inputs relate to the outputs. Understanding these three things also makes it possible to explain why the AI system is preferable to a non-AI alternative.
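Point 3, how inputs relate to outputs, is often approached with tools like permutation importance, which measures how much a model's accuracy drops when each feature is shuffled. Here is a minimal sketch with scikit-learn on invented churn data (the feature names and the data-generating rule are assumptions for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Minimal sketch of answering "how do inputs relate to outputs":
# permutation importance ranks how much each feature drives a churn
# model. The data and feature names are invented for illustration.
rng = np.random.default_rng(7)
n = 5_000
X = rng.normal(size=(n, 3))
feature_names = ["tenure_months", "support_tickets", "page_views"]
# In this toy setup, churn depends mainly on the first two features.
churn = (-0.8 * X[:, 0] + 1.2 * X[:, 1] + rng.normal(0, 0.5, n)) > 0

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, churn)
result = permutation_importance(model, X, churn, n_repeats=10, random_state=0)

for name, imp in sorted(zip(feature_names, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")  # handlers can now explain what drives predictions
```

With an output like this in hand, a handler can answer "why did the model flag this customer?" in terms a manager, a regulator, or a customer can follow.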
Controllable artificial intelligence
Using the term "handlers" above was intentional: an AI system must be able to be controlled and overridden. This might conjure up images of I, Robot and killer robots, and while the reality is rather less lethal, it is still serious. One recent example: Uber's pricing algorithm responded to the crush of people fleeing the scene of the June 2017 terrorist attack in London by adapting (read: increasing) ride prices to more than double the typical fare. Anyone who has taken an Uber is unfortunately familiar with its surge pricing system, but in the aftermath of a terrorist attack, it made Uber look like ruthless profiteers. However, Uber's monitoring system quickly flagged the problem, and mechanisms were in place that allowed the algorithm to be overridden within minutes. The company was also quick to communicate about what was going on, made rides in the area free, and reimbursed those affected. Alas, the damage was done. The episode left a black mark on Uber's reputation and serves as a warning to marketing managers: any algorithm they implement needs constant monitoring and a built-in way to be overridden.
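Architecturally, this means the model's raw output should pass through guardrails and a human-controlled kill switch before it ever reaches customers. A hypothetical sketch follows; the thresholds and function names are illustrative assumptions, not Uber's actual system:

```python
# Hypothetical sketch of a controllable pricing pipeline: the algorithm
# proposes a price, but hard guardrails and a manual kill switch always
# have the last word. All thresholds are illustrative assumptions.
MAX_SURGE = 1.5          # never charge more than 1.5x the base fare
manual_override = False  # flipped by a human operator or a monitoring alert

def algorithmic_price(base_fare: float, demand_ratio: float) -> float:
    """Stand-in for the model's raw output: price scales with demand."""
    return base_fare * demand_ratio

def alert_monitoring(proposed: float, capped: float) -> None:
    print(f"ALERT: model proposed {proposed:.2f}, capped at {capped:.2f}")

def safe_price(base_fare: float, demand_ratio: float) -> float:
    if manual_override:
        return base_fare                    # humans take back full control
    proposed = algorithmic_price(base_fare, demand_ratio)
    capped = min(proposed, base_fare * MAX_SURGE)
    if proposed > capped:
        alert_monitoring(proposed, capped)  # log and notify, don't just clip
    return capped

print(safe_price(base_fare=10.0, demand_ratio=2.3))  # capped at 15.00
```

The design choice worth noting is that the cap and the override sit outside the model: no matter how the algorithm misbehaves, the last line of defense does not depend on it.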
The paradox of automation
The purpose of automation is to replace the role of humans, making tasks faster and more accurate and leaving people free to do more complex work. The downside is that people then lose hands-on experience with those simpler tasks and miss the opportunity to gradually build up their expertise and skills. In marketing, this could mean that everyone from customer service agents to market research analysts misses the chance to hone their skills on the simpler, more repetitive tasks that help them understand customers and their needs, and is left dealing with only the most complicated and unique cases. It remains to be seen what implications this has for the quality of service and work.
The next frontier of AI and marketing: transferring and creating knowledge
What sets AI apart from traditional statistics is its ability to execute higher-order learning, like uncovering relationships between indicators to predict the likelihood that an Internet user will click on an ad, and to do so autonomously. Being able to create knowledge like this is a huge advantage of AI. However, the transfer of knowledge from the AI model to the expert, and vice versa, remains a major weakness. Since marketing deals with human behavior, it requires a lot of common sense, which, as we now know, is not the forte of AI models. Because this kind of knowledge is often implicit, bound up in social codes and norms, it is also harder to program into an AI model. In the other direction, the machine needs to transfer the links it picks up on back to the human expert, so that experts can identify flaws in the system and understand how it is operating. An AI system that can both create knowledge and transfer it back to the human expert is thus the Holy Grail of AI technology.
Takeaways
So what is a marketing manager who wants to use AI to do? There are a few key points to keep in mind:
1. Understand the purpose of implementing the AI system. What are you aiming to accomplish?
2. Identify the added value of the AI system. What does it add over and above human capabilities?
3. Understand what your AI system is doing. What data is it analyzing? How is it producing the results?
4. Examine the system for bias. Does your system have any built-in biases?
5. Communicate: ensure that relevant stakeholders (consumers, employees) have the opportunity to observe and interact with the AI system, to build trust, ensure reciprocal knowledge transfer, and gain practice.