Towards machine pricing

Pietro Parodi looks at the development of machine learning, and the impact on pricing

Back in 2009, I completed a research project for the actuarial profession on the applications of machine learning and, more generally, of artificial intelligence (AI) to general insurance. A brief summary of that work appeared in The Actuary in March 2011 under the title From artificial fish to underwriters. Since then, I am happy to report that the ‘actuary of the future’ piece in this magazine still features human beings and not androids. Everything else that could have happened in relation to automation, however, has started happening:

a) Machine learning has become a household name among actuaries (and almost everyone else), and techniques such as the lasso or elastic net regression are not esoteric names anymore
b) The notion of big data has come to the fore, and its use is the main reason why machine learning has become much more efficient, for example in speech recognition
c) The use of new business models based on digital technology and big data (‘InsurTech’) promises to disrupt the insurance industry
d) Deep learning (a machine learning technique based on many-layered artificial neural networks) has achieved superhuman ability in a variety of domain-specific tasks, from face recognition to the identification of tumours from radiological images, and is now regularly applied to insurance problems such as fraud recognition.

It really looks like, after so many ‘AI winters’ – those periods in history when funding for AI dried up in the wake of crushed expectations – we are going to have a spring that none of us can afford to dismiss.

Machine learning as a theory of modelling
The main contention of that research project – and one that has not dated – was that machine learning is not just another powerful technique that actuaries should learn. Rather, it is the only rigorous theory available on how to build models with predictive power, whether in data-rich situations (personal lines pricing) or in sparse-data situations (London Market). The problem of data-driven risk costing (the basis of much pricing, reserving and capital modelling) is an example of supervised learning – the problem of learning a model (for example, the effect of rating factors, or the parameters of a severity model) from a sample of inputs and outputs (for example, claims amounts), with the objective of minimising the expected prediction error.

Therefore, actuaries should learn machine learning not only to be hip and well-equipped for the onslaught of big data, but because it brings clarity of thought and the right attitude to how they go about their daily job (for example, building and calibrating pricing models and deciding when to use benchmarks). Machine learning gives you mechanisms to optimise the complexity of your models, helping you resist the push towards ever more complicated and supposedly more ‘realistic’ models (see Figure 1, which describes the famous bias-variance tradeoff).

Figure 1: A model should be as complex as necessary – but not more complex
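
To make the tradeoff concrete, here is a minimal sketch – not from the original research, and using simulated data and purely illustrative parameters – comparing polynomial models of increasing complexity by cross-validated prediction error, an estimate of the expected prediction error on new data:

```python
# A minimal sketch of the bias-variance tradeoff on simulated data.
# All data and parameters are illustrative assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))          # a single rating factor
y = np.sin(X[:, 0]) + rng.normal(0, 0.3, 200)  # noisy 'claims' signal

for degree in range(1, 10):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    # Cross-validated error typically falls with complexity, then rises
    # again as the model starts fitting noise (the right side of Figure 1).
    mse = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    print(f"degree {degree}: cross-validated MSE = {mse:.3f}")
```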

A pathway to automation
As intellectually satisfying as this is, the prize for the adopters of machine learning and artificial intelligence in pricing is not purely theoretical – insurers’ CEOs are not (always) interested in theories of modelling. Also, while AI may provide valuable tools for fraud detection and data mining, the competitive advantage of using slightly more accurate costing is likely to be limited. The big prize is that machine learning provides a pathway towards pricing (and reserving, and capital modelling) automation.

The simplest and best-known example is possibly that of rating factors selection. This has never been anything but a machine learning problem. The industry standard – generalised linear modelling augmented with a mechanism to select the right factors – is in itself a well-known supervised learning technique. A low-hanging fruit for machine learning – well underway – is enhancing the existing industry standard with techniques such as lasso regression and cross-validation as a means to select the model with a minimum expected prediction error in a fully automated and efficient fashion. So much is available off the shelf (elastic net, kernel methods, support vector machines…) to keep us busy for years.
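
As a flavour of what this looks like in practice, here is a hedged sketch – invented data, hypothetical factor names – of lasso-based factor selection, in which cross-validation chooses the penalty and factors whose coefficients are shrunk to zero drop out of the model. In production one would more likely penalise a GLM (for example, a lasso-penalised Poisson frequency model); the plain lasso below just shows the mechanics:

```python
# A sketch of automated rating-factor selection with the lasso.
# Factor names and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
factors = ["driver_age", "vehicle_age", "annual_mileage", "region_score"]
X = rng.normal(size=(1000, len(factors)))
# Only the first two factors truly drive the response in this toy example.
y = 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 1, 1000)

lasso = LassoCV(cv=5).fit(X, y)  # penalty chosen to minimise CV error
for name, coef in zip(factors, lasso.coef_):
    status = "kept" if abs(coef) > 1e-6 else "dropped"
    print(f"{name}: coefficient {coef:+.3f} ({status})")
```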

It may well be that the distinction between all these methodologies will soon become as tedious as the distinction between different goodness-of-fit metrics.

A less obvious candidate for automation and machine learning applications is individual contract pricing in commercial lines (or treaty reinsurance). The standard process for this is a patchwork of tasks that are completely algorithmic (for example a Monte Carlo simulation to produce an aggregate loss distribution – sketched in code after the list below) and tasks where judgment is required (data checking and preparation, picking suitable frequency/severity models). A possible pathway towards automation is to re-engineer the process so that the areas requiring judgment are isolated, and a clear protocol for dealing with them using available AI techniques is developed. A couple of examples:

  • Data exploration and preparation can benefit from the use of rule-based systems (rudimentary decision systems based on simple fixed rules), natural language processing (an umbrella term for various statistical machine learning algorithms aimed at extracting information from text) and data mining.
  • AI provides the natural conceptual framework for automating the selection of frequency/severity models and deciding when to resort to portfolio/market data. Where data is scarce, model selection cannot be purely data-driven, but the selection must also be informed by theoretical results (for example, using extreme value theory for large losses).
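
To make the ‘completely algorithmic’ step mentioned above concrete, here is a minimal sketch of a frequency/severity Monte Carlo simulation of the aggregate loss distribution. The Poisson/lognormal choices and all parameter values are illustrative assumptions only:

```python
# A minimal frequency/severity Monte Carlo sketch of the aggregate loss
# distribution. Distributions and parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_sims = 100_000
freq_mean = 3.0                # expected claim count per year (assumed)
sev_mu, sev_sigma = 10.0, 1.5  # lognormal severity parameters (assumed)

counts = rng.poisson(freq_mean, n_sims)
# For each simulated year, sum that many independent severity draws.
agg = np.array([rng.lognormal(sev_mu, sev_sigma, n).sum() for n in counts])

print(f"mean aggregate loss: {agg.mean():,.0f}")
print(f"99th percentile:     {np.percentile(agg, 99):,.0f}")
```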

Of course, full automation would not happen in one go, but in an iterative and piecemeal fashion, as is the case for driverless cars.

The advantages of machine pricing
The advantages of pricing automation would be similar to automation in other fields, but with some specific twists.
1. Machines increase the number of actuarial investigations that can be performed. Since they don’t get tired, they can price as many deals as we want, to the desired level of detail, without having to prioritise the most important work – and they don’t become intractable if they receive updated information one day before the deadline.
2. Machines can improve portfolio management greatly. Background bots (pieces of software that perform tasks on behalf of a user) can maintain the claims database and the portfolio data, and continually update pricing models and portfolio benchmarks (including exposure curves). They can ensure that benchmark curves maintain their relevance, rather than sticking to the same exposure curves that have been in use since the 60s because it would be too onerous to embark on regular reviews. They can also ensure that the optimal number of different benchmark curves is used as more claims experience accumulates and it becomes possible to differentiate risks ever more finely.
3. Machines would be able to price contracts neutrally, without cognitive bias. Neutrality is important – human underwriters and pricing actuaries may be able to incorporate special knowledge and wisdom into specific transactions, but they will not be able to guarantee unbiasedness at a portfolio level. A pricing machine may price incorrectly, but it will be even-handed – and its neutrality at portfolio level can be checked and monitored by actual vs expected analysis. The underwriters or other officers will still have the opportunity to override the machine price, but the override will be documented, and the portfolio effects of underwriting adjustments can be isolated and monitored.
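
As a hypothetical illustration of that monitoring – the segment names and figures below are invented – the actual vs expected check at portfolio level is itself easy to automate:

```python
# A hypothetical actual vs expected (A/E) check of a pricing machine's
# portfolio-level neutrality. Segments and figures are invented.
actual = {"property": 12.4, "casualty": 8.1, "marine": 3.9}    # losses, GBPm
expected = {"property": 11.8, "casualty": 9.0, "marine": 3.8}  # machine expected, GBPm

for segment in actual:
    ratio = actual[segment] / expected[segment]
    flag = "  <- investigate" if abs(ratio - 1) > 0.10 else ""
    print(f"{segment}: A/E = {ratio:.2f}{flag}")
```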

Yes, but what about actuarial judgment?
Automation makes everything more efficient where it can be applied, but surely we still need sound actuarial judgment. Or do we?

It can be argued that at the basis of judgment are experience and knowledge of the answers to many similar cases looked at in the past – the type of ‘hunch’ that immediately tells you that a particular price or parameter is a bit off. A few things can be said about this type of judgment:
a) If defined as above, judgment is ominously similar to deep learning – you train an artificial neural network on a number of relevant cases and it comes up with a strategy that can’t be articulated but gives good results;
b) This type of supposedly exclusively human judgment has been invoked several times in history – most famously to explain why chess software could not possibly beat the very best humans at the game, because humans would have an intuition about positions while a machine could only look ahead a limited number of moves. Over and over again, however, machines have proved themselves to be better at judgment in domain-specific contexts;
c) This judgment is not always correct, especially where past experience is limited – and that will be true for both human and AI judgment.

So is the end nigh for actuaries?
If AI is so great that it may eventually replace judgment, will our jobs still be there in 20 years (the canonical timeframe for safe prediction)?

Some of the recent anxiety about jobs can be traced back to the oft-quoted paper by Frey and Osborne (2013), The future of employment: how susceptible are jobs to computerisation?, which estimates the probability of various jobs being replaced by machines. The paper found that 47% of jobs were at high risk (a probability of 60% or more) of computerisation, and put that probability for insurance underwriters at a staggering 99%. Although this datum can be put to good use for actuary vs underwriter banter, it probably says more about the limitations of the paper’s methodology and assumptions than about underwriters themselves. Specifically, the methodology fails to capture the heterogeneity of tasks performed within a given occupation, only some of which can be automated. A subsequent study (Arntz et al. (2016), The risk of automation for jobs in OECD countries) refined the approach and put the percentage of jobs at high risk of computerisation at a more modest 9%. Nothing specific was said about underwriters (or any other job), but using the study’s task-based approach it is clear that only a small part of an underwriter’s tasks would be amenable to automation, while most would not be. The situation is not dissimilar for pricing actuaries.

Specific tasks will increasingly be automated – as has been the case for decades – but this will redefine actuarial jobs rather than wipe them out. The ‘lump of labour fallacy’ – the idea that there is a finite amount of work to go around, so that automating some of it reduces the need for people – applies to actuaries as much as to the labour market in general. The advent of desktop computers, spreadsheets and programming languages hasn’t reduced the need for actuaries, but has dramatically increased the number of things that actuaries are asked to look at. This trend is likely to continue: AI techniques will make it cheaper to run actuarial investigations, and the demand for them is therefore likely to increase. The actuary is in a privileged position to create and harness the technology and piece everything together.

All this unless, of course, automation becomes so good that it can do the harnessing, the piecing-together and even the theory-building that professionals do. This, however, is far-fetched. Replacing actuaries altogether is an AI-complete problem – that is, a problem equivalent to creating general intelligence. Despite some concerns that a hostile AI may soon take over the world, we’re not remotely close to creating a general intelligence with consciousness and willpower, and the pathway towards it is unclear.

I may not have the best track record for making predictions about artificial intelligence, though. When I was working at the University of Toronto, I used to shake my head in disapproval on seeing my fellow post-docs wasting their best years in Geoff Hinton’s artificial neural networks lab. I, for one, was engaged in much more serious and promising theoretical research on the computational complexity of certain tasks in machine vision, building on my PhD work. Fast forward a couple of decades: Geoff Hinton is now head of AI at Google. My PhD supervisor has long had enough of the whole machine vision business and has gone back to neuroscience. As for myself… well, I’ve become a pricing actuary, so I can’t complain at all, can I?

Pietro Parodi has 10+ years of experience as a pricing actuary and is the author of Pricing in General Insurance (CRC Press, 2014). He will be speaking at the IFoA’s GIRO conference on 20-23 September.
