Computerized decision support systems (DSS) help managers make decisions in situations that involve large amounts of data, uncertain outcomes or repetitive decisions – for example, when doctors choose prescriptions from a huge range of potential treatments. These are situations where the mental models used by human decision makers are inadequate to reflect a complex reality.
However, people sometimes distrust or avoid using DSS because they do not know how DSS arrive at their recommendations. For example, DSS for optimizing retail prices can dramatically outperform human retail managers, but only 5–6% of retail companies use them. The situation is similar in other sectors. Because managers perceive a gap between their own mental models and the DSS’s decision model, they do not accept the DSS. And if their own decision conflicts with that of the DSS, they usually go with their own – even though the DSS’s performance is known to be superior.
In Professor Arnaud de Bruyn's research, he and his colleagues wanted to find out if DSS would be more valued if decision makers understood the rationale behind DSS decisions – that is, if their mental models were more closely aligned with the DSS’s decision model.
Ideally, the decision maker would undergo deep learning: an enduring change in their mental model through the acquisition of new knowledge. Deep learning is more likely when people are required to make an effort to change their mental models, and also given guidance on how to do so.
A two-pronged approach
Their belief, and the starting point for this research, was that DSS users would achieve deep learning if the DSS provided two types of feedback: feedback on upside potential (i.e. the benefits of adopting the DSS’s model) and corrective feedback showing how the decision maker’s mental model should change.
Feedback on upside potential motivates the decision maker to put in the effort to improve, but may not show them how to do it. Corrective feedback shows what needs to change, but may allow decision makers to go along with suggestions without really understanding them. So either type of feedback in isolation leads to shallow learning. Both types of feedback together, however, generate the motivation to change and also give it direction, so deep learning takes place.
The hypotheses to be tested were:
• Deep learning leads to more favourable evaluations of DSS.
• Effort plus guidance promotes deep learning.
• Increased effort without guidance does not lead to deep learning.
• Increased guidance without effort does not lead to deep learning.
They modelled the interaction between effort, guidance, deep learning and DSS evaluation with a statistical model, allowing them to test their hypotheses empirically. As a testing arena, they chose charities seeking donations by direct marketing. These organizations meet all three criteria that might suggest the use of DSS: they have very large databases of past donors, the outcome of any individual donation request is uncertain, and they conduct frequent campaigns that involve repeated similar decisions.
Their study participants played the role of direct marketing managers in a large charity helping those affected by natural disasters. Some participants were MBA students with direct marketing experience; others were practising charity marketing managers.
To ensure they could give immediate, reliable feedback (difficult to achieve in the real world), the researchers ran the study under controlled experimental conditions, using a frequently occurring, realistic decision problem. Decision makers had to use a DSS to select potential high-value donors from a database of 200,000 fictional individuals based on four factors: recency of last donation, frequency of donation, amount of past donations and donor age.
Each donor’s probability of donating (or ‘attractiveness’) was calculated with a formula based on these factors. Participants did not know the formula, but were told that donors were more likely to donate if they (1) had donated more recently, (2) had donated more frequently in the past five years, (3) had donated greater amounts, and (4) were older.
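The study’s actual formula was hidden from participants and is not disclosed here, so the weights, normalization ranges and functional form below are purely illustrative assumptions. A scoring rule of this general shape, combining the four stated factors, might look like:

```python
def attractiveness(recency_months, frequency_5y, total_amount, age,
                   weights=(0.35, 0.25, 0.25, 0.15)):
    """Score a donor's attractiveness on a 0-100 scale from four factors.

    All weights, caps and ranges here are hypothetical, not the study's.
    """
    w_r, w_f, w_m, w_a = weights
    # Normalize each factor to [0, 1]; more recent donations score higher.
    recency_score = max(0.0, 1.0 - recency_months / 60.0)   # 5-year window
    frequency_score = min(frequency_5y / 10.0, 1.0)         # cap at 10 gifts
    monetary_score = min(total_amount / 1000.0, 1.0)        # cap at 1,000
    age_score = min(max((age - 18) / 62.0, 0.0), 1.0)       # 18-80 range
    return 100.0 * (w_r * recency_score + w_f * frequency_score
                    + w_m * monetary_score + w_a * age_score)
```

Any formula of this kind is monotone in the four factors, which is all participants were told: a recent, frequent, generous, older donor always outscores a lapsed, infrequent, small, younger one.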
With performance-related pay as their incentive, participants were asked to rate the attractiveness of 20 donors on a scale of 0 to 100. Based on the ratings they had submitted, the DSS then provided participants with a prediction of the likely performance of the campaign in terms of revenue generated. In some cases, the DSS also provided upside potential feedback, by revealing the maximum revenue that could have been achieved if the participant had rated the donors with 100% accuracy. In other cases, the DSS provided corrective feedback by telling the participant when they were under- or over-weighting particular factors. And in a final set of cases, the DSS provided both types of feedback together. The participants then repeated the ratings exercise with the same set of 20 donors, ten times, aiming to improve their performance.
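The two feedback types described above can be sketched in code. This is a minimal illustration only: the thresholds, message wording and interface are assumptions, not the study’s actual DSS.

```python
def upside_feedback(achieved_revenue, max_revenue):
    """Upside-potential feedback: the gap to a perfectly accurate rating."""
    gap = max_revenue - achieved_revenue
    return f"A perfectly accurate rating would have raised {gap:,.0f} more."

def corrective_feedback(user_weights, true_weights, factor_names, tol=0.05):
    """Corrective feedback: flag factors the user under- or over-weights.

    The tolerance band is a hypothetical choice for illustration.
    """
    messages = []
    for name, u, t in zip(factor_names, user_weights, true_weights):
        if u < t - tol:
            messages.append(f"You are under-weighting {name}.")
        elif u > t + tol:
            messages.append(f"You are over-weighting {name}.")
    return messages
```

Note the division of labour: the first function quantifies the benefit of improving (motivation) without saying how, while the second names the specific factors to adjust (direction) without quantifying the payoff, which is why the study combined them.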
Unobtrusively, the researchers also used participants’ answers to calculate the weighting each participant assigned to each of the four factors. This revealed the mental models they were using for their decisions, and allowed the researchers to assess whether those models were moving closer to the DSS’s decision model over time. Then, to determine whether deep learning had taken place, they asked participants to rate a different set of 20 donors from the same database. If their performance was no better than it had been at the start of the experiment, it would indicate that any learning they had done was shallow and temporary.
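One standard way to recover the implicit weights behind a set of ratings is ordinary least squares: find the weight vector that best reproduces the participant’s ratings from the factor values. The estimation method used in the study is not specified, so this pure-Python sketch (normal equations solved by Gaussian elimination) is an assumption about how such weights could be inferred.

```python
def solve(A, b):
    """Solve the linear system A x = b by Gauss-Jordan elimination."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        # Partial pivoting for numerical stability.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def implied_weights(factors, ratings):
    """Least-squares weights minimizing ||factors . w - ratings||^2.

    `factors` is a list of rows (one per rated donor), `ratings` the
    participant's scores; solves the normal equations (A'A) w = A'b.
    """
    n = len(factors[0])
    AtA = [[sum(row[i] * row[j] for row in factors) for j in range(n)]
           for i in range(n)]
    Atb = [sum(row[i] * y for row, y in zip(factors, ratings))
           for i in range(n)]
    return solve(AtA, Atb)
```

Fitted repeatedly across the ten rounds, weights like these would trace how far each participant’s mental model drifted toward the DSS’s decision model.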
They also gave participants a survey covering their subjective assessment of the effort they had put in, the usefulness of the guidance received from the DSS and their confidence in their own decisions.
The results provided strong support for all the hypotheses, confirming their premise that people will resist using DSS unless they are designed to help them understand the basis for recommendations, and also how following those recommendations will lead to better performance. They also confirmed that deep learning is essential for managers to form a favourable evaluation of a high-quality DSS. Their research should help DSS designers to increase the likelihood of their systems being used, and also help firms to get more value and improved performance from their investments in DSS.
"How Incorporating Feedback Mechanisms in a DSS Affects DSS Evaluations", published in Information Systems Research.