Marie Kratz, Professor at ESSEC Business School and Director of ESSEC CREAR (Center of Research in Econo-finance and Actuarial sciences on Risk), shares her thoughts about how artificial intelligence will re-shape the actuarial profession.
___
We’ve all experienced those minor, sometimes major, mishaps in life – your bike gets stolen, you have a car accident, you drop your computer or, worse still, you wake up one morning to find the living room swamped in 3 feet of muddy floodwater. And we have all witnessed the drama of natural catastrophe on the news. The camera covers the victims, some considering themselves lucky, others overcome by the emotion of losing their hard-earned belongings or loved ones – all of them waiting for the complications that will follow, not least the financial ones.
This is where the insurance company comes in. And this is one of the major stakes in the client-service relationship, too. The insurance industry generally has a bad reputation for slowness in following up claims and for errors in damage assessments and amounts paid – with the result that one of the key customer-retention factors today is the capability to get close to clients and offer a personalized service. Artificial intelligence, as it were, at the service of emotional intelligence.
Little sister rather than big brother
This is where Artificial Intelligence (AI) can help. Prof. Marie Kratz believes it will play a fundamental role in those critical moments the victim may face – a rapid analysis of the situation and therefore an effective calculation of premiums and drafting of contracts. But that is not all. Kratz asserts that using AI will largely facilitate the task of analyzing accidents, disasters and the damage they cause. Indeed, crop insurers in the US are already using artificial intelligence to analyze exposure to risk before contracts are signed, as well as the amounts to pay out when farmers make a claim. The same approach, using satellite images, can be used to assess flood damage, thereby ensuring precision when calculating the damage to land, crops and assets.
But satellites aren’t the only tools. Artificial intelligence can also be used closer to home: Allianz, a leading insurance group, has decided to go ahead with a project to determine the amount of damages due from photographs taken at the site of a car accident. The Swiss company AXA Winterthur already offers its younger policyholders a reduction in premiums if they agree to install a black box in their car that faithfully records their driving habits, the data collected being used to better price the insurance protection offered. RiskLab at ETH Zurich has undertaken a research project on data science in insurance pricing on the subject, on behalf of AXA Winterthur.
Other insurance companies must implement such methods, states Marie Kratz – at every level of the organization and in its ties with its clients. The giants – AXA and Allianz – have already begun to invest heavily in the field of AI-assisted services, in what she describes as one of the major stakes for insurance companies in the coming years.
Beware of the giants
The internet juggernauts such as Google, Apple, Facebook and Amazon constitute potential competitors to the insurance companies in the race to use artificial intelligence, particularly in the development of networks of similar profiles for car insurance, for example. Although there is little evidence to suggest that the giants have taken the road to car insurance using AI, the sheer size of their databases could easily allow them to do so, just as it has enabled them to use big data to determine user profiles and display targeted products ranging from clothes to dating agencies and funeral arrangements. It is only by developing added value in terms of customer care and services, adds Prof. Kratz, that the insurance companies can survive and develop.
Trouble at t’ mill
Will there still be a place for humans? The question has always been on the mind – and in the remonstrations – of mankind when faced with technological revolution. Witness the Luddites in early industrial Britain, who destroyed factories and machinery in protest, many of whom were transported for life or hanged as a result. It is unlikely that the armies of insurance men will face either such a loss of employment or, thankfully, such retribution in the face of artificial intelligence. Prof. Kratz points out that human-machine interfaces will develop as much internally – in management – as externally, towards customers. This would imply large-scale training initiatives for insurance company personnel, with only repetitive, low value-added posts such as accounting and monitoring being done away with, not – it is worth noting – areas such as the interpretation of data and results.
That said, there have been no real studies that provide exact figures for job losses relating to the introduction of AI in insurance, though it is certain that some will have to go. However, while some jobs will disappear, others will change and others still will be created. Indeed, at higher and vocational education level, diploma and degree programs for actuaries have already begun to include data science in their curricula, although there is still some confusion over the distinction between data scientists and statisticians – a sure sign that the field is in a period of transition. Prof. Kratz’s feeling is that while the definition and expectations attributed to professions within the insurance field will change drastically in the coming years, there will be no great reduction in jobs – in insurance or other service industries – while the added value of human work will be increasingly sought after. Her thoughts are in line with a recent OECD study that concluded that 9% of jobs are at risk, while 25 to 50% will have to change in profile[1].
In robots we trust – humans are another question
One of the aspects of using artificial intelligence that causes concern is cybercrime. It already exists, but the threat will grow with the increasing use of computers and robots. For Prof. Kratz, the threat to reckon with is more a question of how management and decision-makers – with little perspective on or understanding of the tool – will use it as an all-powerful black box to run their company and its offering. In that light, training initiatives for management must include the scientific and technical dimensions of the products or services they sell.
Insurance scams have always existed and will continue to do so – although in different forms. However, Marie Kratz asserts that artificial intelligence also enables insurance fraud to be detected more easily. For banks, the use of AI tools has already made inroads in combatting credit card fraud, with the effects rippling on to the insurance companies, who now consider cyber-risk as something to insure against. Here, too, several research projects have seen the light of day, including the CyRIM project at NTU Singapore with the insurance industry, spurred by the MAS (Monetary Authority of Singapore), or the Cambridge-Lloyd's collaboration, which has managed to model such a risk.
Robots and men: shared imperfection
Risk can also come from the robot itself. There is no such thing as zero risk of error – especially as the tools are made by humans in the first place. The obvious risk is that a badly programmed machine might generate information and issues faster than its human controller could contain them. Machines can, just like humans, make mistakes in interpreting data too, falsely generalizing results. This means that while the results obtained correctly describe the data – and may indeed help in analyzing the past or present – they do not hold any predictive value.
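To make that pitfall concrete, here is a minimal, purely illustrative sketch in Python using synthetic data and the scikit-learn library – an assumption for illustration, not a description of any insurer's system. A model flexible enough to memorize its training data describes that data perfectly, yet predicts poorly on data it has not seen.

```python
# Illustrative sketch (synthetic data): a model that "correctly describes"
# its training data yet has little predictive value, i.e. it overfits.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical portfolio: one noisy risk factor driving claim severity.
risk_factor = rng.uniform(0, 1, size=(500, 1))
claim_cost = 1000 * risk_factor.ravel() + rng.normal(0, 300, size=500)

X_train, X_test, y_train, y_test = train_test_split(
    risk_factor, claim_cost, random_state=0)

# An unconstrained tree memorizes the training set...
overfit = DecisionTreeRegressor().fit(X_train, y_train)
# ...while a depth-limited tree keeps only the stable signal.
regular = DecisionTreeRegressor(max_depth=3).fit(X_train, y_train)

for name, model in [("unconstrained", overfit), ("depth-limited", regular)]:
    print(name,
          "train R2:", round(model.score(X_train, y_train), 2),
          "test R2:", round(model.score(X_test, y_test), 2))
```

The unconstrained model scores near-perfectly on the data it has seen and noticeably worse on the data it has not – a description of the past, not a prediction of the future.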
Indeed, that is a whole issue in itself – how to correctly foresee the future. Something, moreover, that neither man nor machine can yet do with any great accuracy. In artificial intelligence, one way to attempt to master this challenge is to introduce random noise into the data to check the robustness of results and avoid them being ‘too perfect’ and too far removed from the fast-moving – and sometimes changeable – behaviors of humans. Indeed, states Prof. Marie Kratz, one of the biggest problems in predicting behaviors is that they change in response to changes in the environment, sometimes producing sharp reversals in conduct or performance. It’s a delicate thing to grasp in any circumstance – for both men and machines.
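The robustness check Kratz alludes to can be sketched in the same illustrative spirit – again with synthetic data and an arbitrarily chosen quantity of interest, not any insurer's actual method: re-estimate the quantity on copies of the data perturbed with small amounts of random noise, and see how far the estimate moves.

```python
# Illustrative sketch: perturb the data with random noise and re-estimate,
# to check how robust ("solid") a fitted quantity is.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical observed claim amounts.
claims = rng.lognormal(mean=7.0, sigma=1.0, size=1000)

def estimate(data):
    # The quantity whose robustness we want to check: here the 95% quantile
    # of claim amounts, a crude stand-in for a pricing or solvency figure.
    return np.quantile(data, 0.95)

baseline = estimate(claims)

# Re-estimate on noisy copies of the data.
noise_scale = 0.05 * np.std(claims)
perturbed = [
    estimate(claims + rng.normal(0, noise_scale, size=claims.size))
    for _ in range(200)
]

print(f"baseline estimate : {baseline:,.0f}")
print(f"under noise       : {np.mean(perturbed):,.0f} "
      f"+/- {np.std(perturbed):,.0f}")
# A result that swings wildly under small perturbations fits the particular
# sample 'too perfectly' and should not be trusted for prediction.
```

An estimate that barely moves under such perturbations is worth acting on; one that swings wildly is telling you more about the sample than about the future.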
[1] OECD Policy Brief on the Future of Work available at: www.oecd.org/employment/future-of.work.htm