US Mortality Improvement: Analysis Methodology
December 2015

Actuaries incorporate mortality improvement assumptions into their calculations for a variety of product lines. The notion that continued medical advances and lifestyle improvements will reduce future mortality rates has significant implications for life insurance, annuities, pensions, medical and long-term care insurance products.

Estimating mortality improvement, however, can be challenging. In this article I briefly introduce the most common industry approach and then present modeling alternatives that companies can use to estimate mortality improvement for their target markets more easily.

The Industry Standard
Actuaries have long incorporated mortality improvement forecasts into their pricing methodologies, though forecasting remains an inexact science. For our industry, the Lee-Carter approach1 to estimating mortality improvement remains a widely used and accepted standard. It is very robust, though like many of our best-in-class models, it can be a little complex for non-practitioners.

Alternative Models
For actuaries, the complexity of the Lee-Carter method is reasonable, especially in light of the results it can produce. However, some companies may not have use for the intricacies of Lee-Carter or may find explaining the model's dynamics to non-statisticians challenging. As an alternative, I suggest a couple of methods that balance simplicity with effectiveness.

One method uses actual observed annual mortality improvement rates; the other estimates improvement rates through regression techniques. In our examples, I use United States population data from the Human Mortality Database2. Each method has its advantages and disadvantages (Figure 1).

Figure 1 - Mortality Improvement Estimating
Using actual population mortality experience, actuaries can calculate year-by-year improvement rates and average these values over the time period under consideration. The advantage of this method is that mortality improvement standard deviations can also be calculated – a vital component to determine confidence intervals. The disadvantage of this method is that, in using raw mortality rates, it makes no attempt to smooth or trend the data.
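The averaging method can be sketched in a few lines of Python. The qx series below is illustrative only, not actual Human Mortality Database values, and the variable names are my own:

```python
import numpy as np

# Hypothetical series of mortality rates (qx) for one age group, one
# value per calendar year -- illustrative numbers, not HMD data.
qx = np.array([0.00520, 0.00511, 0.00498, 0.00501, 0.00488,
               0.00479, 0.00470, 0.00468, 0.00455, 0.00449])

# Year-by-year improvement: the proportional drop from one year to the next.
annual_improvement = 1.0 - qx[1:] / qx[:-1]

mean_improvement = annual_improvement.mean()
# Sample standard deviation (ddof=1) -- the vital ingredient this method
# provides for building confidence intervals later.
stdev_improvement = annual_improvement.std(ddof=1)
```

Note that the improvement series is one element shorter than the qx series, since each improvement rate needs two consecutive years of data.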

Alternatively, actuaries may create a regression model from the historical data and use modeled mortality rates to calculate an implied improvement rate. The advantage of this approach is that the impact of anomalous values (or outliers) is minimized and thus may represent a better view of mortality trends. However, once the underlying mortality rates have been fitted to a model, the resulting smoothed mortality rates cannot be used to calculate a standard deviation for the dataset.

In theory, a log-linear regression model of the qx rates should produce results that are consistent with the methodology used to project mortality rates into the future. In practice, a simpler linear model produces nearly identical results. Figure 2 compares annualized improvement rates from linear and log-linear regression models for the US Population Males 1950-2007.
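Both regressions can be fitted with standard least squares. The sketch below uses an illustrative qx series, not the actual US population data, and annualizes the simple linear slope against the mean level of the series (my own choice of normalization):

```python
import numpy as np

# Hypothetical qx series by calendar year (illustrative, not HMD data).
years = np.arange(1998, 2008)
qx = np.array([0.00520, 0.00511, 0.00498, 0.00501, 0.00488,
               0.00479, 0.00470, 0.00468, 0.00455, 0.00449])

# Log-linear fit: ln(qx) = a + b * year.  The fitted slope b implies a
# constant annual improvement rate of 1 - exp(b).
b_log, a_log = np.polyfit(years, np.log(qx), 1)
loglinear_improvement = 1.0 - np.exp(b_log)

# Simple linear fit on the raw qx for comparison; annualize the slope
# against the mean level of the series.
b_lin, a_lin = np.polyfit(years, qx, 1)
linear_improvement = -b_lin / qx.mean()

# For a slowly declining series the two annualized rates agree closely.
```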

Figure 2 - Mortality Improvement Modeling

To one decimal place, the linear and log-linear modeled rates are identical, so either model may be used to produce satisfactory results.

Combining Methods
Two sets of annualized improvement rates – one calculated by a simple average of raw values and the other by a regression model – may differ by only a few hundredths or tenths of a percentage point. I want to use the more appropriate mean (i.e., the one from the regression model), but I also need a standard deviation to produce confidence intervals.

Note that simply changing the mean of a series of values by subtracting a constant amount from each value does not change the standard deviation of the series. Therefore, I can safely use the raw data’s standard deviation combined with our regression model’s mean to capture the best of both worlds.
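This invariance is easy to check numerically. In the sketch below, the improvement rates are randomly generated and the regression mean is a made-up placeholder; only the shift-invariance property itself is the point:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical annual improvement rates for 57 observed years.
raw_rates = rng.normal(loc=0.015, scale=0.01, size=57)
raw_std = raw_rates.std(ddof=1)

# Hypothetical mean improvement rate taken from a regression model.
regression_mean = 0.013

# Shift every value by a constant so the series takes the regression
# mean; the standard deviation is unchanged.
shifted = raw_rates - raw_rates.mean() + regression_mean
assert np.isclose(shifted.std(ddof=1), raw_std)
```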

Central Limit Theorem
The resultant standard deviation represents the fluctuation in annual improvement rates. In projecting future mortality, I typically need to calculate the fluctuation in the long-term mean of improvement rates. I use the central limit theorem to determine the standard deviation (stdev) of the mean of our historical sample dataset from the standard deviation of the annual improvement rates:

stdev (n-year average annual improvement rate) = stdev (1-year annual improvement rate) / √n
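A small simulation confirms this relationship. The 0.010 standard deviation and the normal distribution below are assumptions chosen for illustration, not values from the article's dataset:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stdev of annual improvement rates over a 57-year sample.
stdev_annual = 0.010
n = 57

# Central limit theorem: stdev of the n-year average improvement rate.
stdev_of_mean = stdev_annual / np.sqrt(n)

# Simulation check: draw many 57-year samples and compare the empirical
# stdev of their means against the CLT value.
sample_means = rng.normal(0.015, stdev_annual, size=(100_000, n)).mean(axis=1)
# sample_means.std(ddof=1) comes out very close to stdev_of_mean.
```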

Figure 3 illustrates our projection method. The historical period is 1950-2007 and the projection period is 2008-2050.

Figure 3 - Long-Term Mortality Improvement Trends

I have calculated the historical mean improvement rate from a log-linear model of the qx's. Our best-estimate mortality projection uses the regression model to forecast rates into the future. To calculate a confidence interval around this best estimate, I first determine the standard deviation of the mean improvement rate by dividing the standard deviation of the 1950-2007 sample dataset by the square root of 57 (the number of observed annual improvement rates).

Multiplying the standard deviation of the mean by ±1.96 produces 95% confidence limits around our best estimate improvement rate. I may then project mortality by the following iterative formula:

q(x+t) = q(x+t-1) × (1 − imp rate)
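The projection and its confidence limits can be sketched as follows. The starting qx, mean improvement rate, and standard deviation of the mean are hypothetical placeholders, not the article's fitted values:

```python
# Hypothetical inputs: regression-model mean improvement rate, the
# CLT-derived stdev of that mean, a starting qx, and the horizon.
mean_imp = 0.015
stdev_mean = 0.00132          # e.g. 0.010 / sqrt(57)
q_start = 0.0045              # qx in the last observed year (2007)
horizon = 43                  # 2008 through 2050

def project(q0, rate, years):
    """Apply q(t) = q(t-1) * (1 - rate) iteratively for `years` steps."""
    q = q0
    for _ in range(years):
        q *= (1.0 - rate)
    return q

best_estimate = project(q_start, mean_imp, horizon)
# 95% limits use improvement rates of mean ± 1.96 * stdev_mean.  A higher
# improvement rate produces a LOWER projected qx, so the bounds flip.
lower_q = project(q_start, mean_imp + 1.96 * stdev_mean, horizon)
upper_q = project(q_start, mean_imp - 1.96 * stdev_mean, horizon)
```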

Choosing Appropriate Historical Periods
In determining average historical improvement rates, I should use only the most appropriate time periods from the available dataset. I begin by looking at the pattern of mortality rates since 1950 by age group and gender. A year in which a significant and permanent change occurred in the pattern for a specific age group and gender may be used to censor the data prior to that year.

However, I should always ensure that I am using a reasonable number of data points. For example, censoring the data at a change observed in 2003 would leave only five years of observations (2003-2007), too few to support a credible estimate, and would not normally be warranted without a sufficiently strong rationale.

For males 45-49, a significant and seemingly permanent change in the pattern of mortality occurred around 1982 (Figure 4). Thus, I have created a linear regression model for the raw qx using only the data from 1982 onward.

Figure 4 - Identifying Permanent Change in Trends

For comparison purposes, a regression that included all data points from 1970 to 2007 would have produced an average improvement rate of 1.6% as opposed to the 0.8% rate I actually used. Using the data back to 1970 would appear to overstate the degree of improvement that was “reset” in 1982.

As medical advances, lifestyle changes and general health improvements continue across the population, accurately estimating mortality improvement will become increasingly important in modeling life insurance business.

More readily available and usable data allows life insurers to build their own mortality projections in-house. In this article, I have presented a simple example of one such approach, combining the best of actual improvement rates with a regression model.

SCOR has dedicated full-time resources to developing and evaluating different approaches to making the best modeling decisions. We continue to work with clients to help them understand their business more fully and extract the greatest value from the data they possess. If you would like to discuss your company’s approach to incorporating mortality improvement into your models more effectively, please contact me.

1 For more information on Lee-Carter, see
2 The Human Mortality Database. U.S.A. Deaths and Exposure-to-risk 1933-2007.