
Mortality Studies: A Critical Tool in the Underwriter's Toolbox
August 2019

Achieving mortality outcomes commensurate with assumptions remains as critical as ever, and a basic understanding of how mortality is measured, predicted and influenced has always been a desirable skill.

Now, however, it is vital, no longer a ‘nice to have’ addition. All too often in the past, underwriters have been able to defer to their actuarial colleagues when discussing the details of mortality studies. I suggest those times have come and gone.

The good news is that there is no reason underwriters cannot add this tool to their toolbox. The basic concepts involve simple probability and basic algebra. While more sophisticated knowledge may be required to compile and conduct a mortality study, interpreting and using its results requires much less preparation.

Understanding actual/expected experience
The space of this article is not sufficient to explain the exact how-to, but finding a contact in one’s actuarial department may be an excellent start. I have been fortunate over the course of my career that innumerable colleagues took the time to explain their work to me, and I am grateful to all of them. In many instances that personal teaching is far better than learning from books.

Instead, let me lay out in more detail why it is time for most underwriters to brush up their knowledge of mortality studies. When I use the term mortality studies, I usually refer to the traditional tabular actual-to-expected (A/E) approaches of life insurance, though not exclusively. Mortality can be measured in many different ways, and understanding one approach makes the path to another much easier.
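To make the tabular A/E idea concrete, here is a minimal sketch of the core calculation: actual deaths in a cohort divided by the deaths an expected-mortality table would predict for the same exposure. The rates, age bands and cohort counts below are hypothetical, chosen only for illustration.

```python
# Minimal tabular actual-to-expected (A/E) sketch.
# All rates and cohort figures below are hypothetical.

# Expected annual mortality rates (qx) by age band, e.g. from an industry table
expected_qx = {"40-49": 0.0015, "50-59": 0.0040, "60-69": 0.0100}

# Cohort exposure: life-years observed and deaths counted per age band
cohort = {
    "40-49": {"life_years": 120_000, "deaths": 150},
    "50-59": {"life_years": 80_000, "deaths": 352},
    "60-69": {"life_years": 30_000, "deaths": 285},
}

actual = sum(c["deaths"] for c in cohort.values())
expected = sum(c["life_years"] * expected_qx[band] for band, c in cohort.items())
ae_ratio = actual / expected

print(f"Actual deaths:   {actual}")        # 787
print(f"Expected deaths: {expected:.0f}")  # 800
print(f"A/E ratio:       {ae_ratio:.2%}")  # 98.38%
```

An A/E ratio below 100% means the cohort experienced lighter mortality than the table predicted; the banding is what makes results comparable across cohorts.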

Here are some reasons why:
  • Underwriters routinely communicate class and rating decisions upstream and downstream. Understanding the underpinnings of those decisions makes for much better communication. 
  • Absent credible historical experience we often must resort to external studies and proxy data to design selection approaches. This is quite impossible unless one knows how to pick appropriate studies and data for any given task.
  • Evidence-based predictive models routinely outperform the rule-based systems of the past built by subject matter experts (SMEs). Understanding why that is so, and being able to explain it to others, is critical to supporting the change process.
  • Margins between success and failure are small. That part has not changed, but the inputs have changed. What may seem to be trivial differences to the uninformed may be vital components to a new selection process.
  • Interaction – the fact that most factors used to categorize and predict mortality outcomes overlap. This is nearly impossible to assess without good data and even more impossible to explain without a solid understanding of the dynamics.
  • Different predictive models often disagree when applied at the applicant level. Deciding which of them is ‘right’ or appropriate requires a detailed understanding of how such models predict mortality.
Of all the possible sources of confusion, the last two – interaction and lack of agreement between scores – are perhaps the biggest current stumbling blocks, and only a more detailed understanding of the drivers and dynamics of mortality measurement (and prediction) can help explain them. So, let’s dive a little deeper.

The challenge of too few deaths
Mortality is a rare event – for any given year that a policy is in force, survival is hundreds to thousands of times more likely than death. However, the mathematical credibility of a mortality measurement is directly related to the number of (countable) deaths. A good rule of thumb is that in order to draw any significant conclusions about mortality patterns, every class examined should contain at least 50 documented deaths.
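One way to see where a threshold like 50 deaths comes from: if death counts are roughly Poisson-distributed, the relative standard error of the count is about 1/sqrt(deaths). The sketch below works through that arithmetic for a few cell sizes; the Poisson approximation and the 50-death cutoff are a rule of thumb, not a hard standard.

```python
import math

# Relative standard error of an observed death count under a Poisson
# approximation: roughly 1 / sqrt(deaths). Illustrative cell sizes only.
rel_se = {d: 1 / math.sqrt(d) for d in (10, 50, 200, 1000)}

for deaths, se in rel_se.items():
    # Approximate 95% confidence interval around an A/E ratio of 100%
    lo, hi = 1 - 1.96 * se, 1 + 1.96 * se
    print(f"{deaths:>5} deaths: relative SE {se:.1%}, 95% CI ~{lo:.0%} to {hi:.0%}")
```

At 10 deaths the interval is so wide as to be nearly useless; at 50 the relative error drops to about 14%, which is where a single cell starts to support cautious conclusions.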

Those two conditions, rare event and minimum number of deaths, require that we aggregate many applicants/policyholders into a cohort to be studied. We would like for those applicants to be ‘alike’ on as many parameters as possible, e.g., within a certain age range, all male or all female, all within the same tobacco class and so on.

Once we add more biometric measures to those conditions (build, blood pressure, lipid levels, etc.), it quickly becomes clear that creating groups large enough to be credible yet still ‘alike’ gets harder and harder. Factoring in non-biometric attributes makes it all but impossible to create groups that are truly ‘alike’ in all respects.
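The squeeze is easy to quantify: each additional factor multiplies the number of cells, and the deaths available per cell shrink accordingly. The factor list, level counts and death total below are assumptions made up for illustration.

```python
# Hypothetical illustration: how splitting one block of business by more
# and more factors dilutes the deaths available in each cell.
total_deaths = 5_000  # assumed deaths observed across the whole block

factors = {
    "age band": 6,
    "sex": 2,
    "tobacco": 2,
    "build": 4,
    "blood pressure": 3,
    "lipids": 3,
}

cells = 1
for name, levels in factors.items():
    cells *= levels
    avg = total_deaths / cells
    print(f"after {name:>14}: {cells:>4} cells, ~{avg:,.1f} deaths per cell")
```

By the time all six factors are applied, 5,000 deaths are spread over 864 cells, fewer than six deaths per cell on average, far short of the 50-death rule of thumb.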

Different ways to predict mortality
Now take the common scenario of having two mortality prediction scores built on very different inputs. One uses more traditional biometric measures, while the other uses credit-style and behavioral elements. Both are well built and credible in their own right, but each aggregates a completely different set of applicants it considers ‘alike’. One places applicants with favorable build, blood pressure, lipids and health history into the best class, while the other selects applicants with favorable incomes, assets, payment records and driving history.

It is not only possible but likely that any number of applicants will receive an ‘excellent’ score from one model and a ‘reject’ from the other.
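A toy simulation makes the point. Below, two synthetic scores are generated per applicant: a ‘biometric’ score and a ‘behavioral’ score that is only weakly correlated with it (the correlation strength, score construction and class cutoff are all assumptions). Even when both scores are individually sensible, only a minority of applicants land in the best class under both.

```python
# Toy illustration of applicant-level disagreement between two mortality
# scores built on different inputs. All scores here are synthetic.
import random

random.seed(0)
n = 10_000
biometric, behavioral = [], []
for _ in range(n):
    b = random.gauss(0, 1)  # health-based score
    biometric.append(b)
    # behavioral score only weakly correlated with the biometric one
    behavioral.append(0.3 * b + 0.95 * random.gauss(0, 1))

def best_class(scores, frac=0.2):
    """Indices of applicants in the top `frac` of a score (the 'best' class)."""
    order = sorted(range(n), key=lambda i: scores[i], reverse=True)
    return set(order[: int(n * frac)])

best_bio = best_class(biometric)
best_beh = best_class(behavioral)

overlap = len(best_bio & best_beh) / len(best_bio)
print(f"Share of one model's best class also ranked best by the other: {overlap:.0%}")
```

The exact overlap depends on the assumed correlation, but the qualitative result is robust: two defensible scores routinely assign the same applicant to very different classes, which is precisely the decision problem handed to underwriting.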

These predictions only achieve credibility once they are rolled up into larger groups, and the models are not designed to be casually mixed and deployed without significant additional work, irrespective of their individual credibility. The owners and creators of the various prediction approaches will tend to emphasize the strengths of their model without saying much about its possible shortcomings. Yet it is frequently the underwriting department that is asked to decide whether to deploy one approach, the other or both, and in what combination.

Without understanding many of the features of mortality measurement, it may be impossible to design effective selection approaches incorporating many of the tools that are made available each day.

While the task may not fall exclusively on the underwriter’s shoulders, he/she certainly needs to be at the table and able to participate fully in the decision-making process. Without the necessary background knowledge, that is much more difficult.

With every period of change come challenges, but also great opportunity. My advice to all underwriters is that a small investment in updating some fairly basic knowledge can pay significant dividends. And who knows – you may make some friends amongst your actuarial colleagues in the process.