Multivariable predictive models are used extensively in medical research and epidemiology. They have become important tools for companies in credit card, banking and P&C/health insurance industries to identify preferred customers and improve risk selection. It is thus not surprising that the major life insurance laboratories would use their massive databases to model the chief concern of their clients, mortality.
In the late 1980s, insurance labs upgraded their IT infrastructures and insisted on using Social Security numbers for identity matching. As a result, the labs are now well positioned to use the Social Security Administration’s Death Master File as a source of mortality follow-up for all the life insurance applicants in their databases, whether insured, lapsed or never issued.
The insurance labs appear to be using the two flavors of statistical modeling that lend themselves to analyzing the relationship between laboratory/examination data and mortality outcomes: logistic regression and Cox proportional hazards modeling. At least one lab then translates model results by hand into a mortality scoring system.
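To illustrate the first of these two approaches in miniature: a logistic regression combines risk factors into a single linear predictor and maps it to a probability. The function and every coefficient below are hypothetical, invented for illustration; the labs' actual models and fitted coefficients are not published.

```python
import math

def logistic_mortality_prob(age, systolic_bp, total_chol, weights):
    """Toy logistic regression: probability of death within a study
    period as a function of age and two lab/exam values.
    All coefficients are hypothetical, for illustration only."""
    # Linear predictor: intercept plus weighted risk factors
    z = (weights["intercept"]
         + weights["age"] * age
         + weights["sbp"] * systolic_bp
         + weights["chol"] * total_chol)
    # Logistic link maps the linear predictor to a probability in (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients; in real modeling these are fit to outcome data
w = {"intercept": -9.0, "age": 0.08, "sbp": 0.01, "chol": 0.002}

p = logistic_mortality_prob(55, 130, 210, w)
```

In an actual model the coefficients would be estimated from follow-up data, and a Cox model would instead estimate how each factor scales a baseline hazard over time; the multivariable structure is the same.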
A multivariable approach allows the modeler to identify associations not seen when considering underwriting factors such as total cholesterol or systolic blood pressure in isolation. It can also identify non-linear or paradoxical relationships, such as the association between low cholesterol values and elevated mortality risk. These capabilities are at the core of the promise implicit in the mortality scores offered by the insurance labs.
By contrast, the knockout criteria used by most underwriters to select preferred risks (or even sub-standard risks) generally consider factors in isolation or as simple ratios. Knockout thresholds also assume a monotonic relationship to mortality (e.g., a higher value is always a worse risk). The labs’ mortality scores promise to better estimate risk (predict mortality) by taking into consideration the interactions between lab test values/exam findings and their non-linear relation to mortality.
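The contrast between the two philosophies can be made concrete with a toy example; the threshold, nadir, and curvature below are hypothetical. A knockout rule penalizes only values above a cutoff, while a U-shaped model curve assigns elevated risk at both extremes:

```python
def knockout_flag(total_chol, threshold=240):
    """Knockout rule: only values above the threshold are penalized.
    Threshold is hypothetical."""
    return total_chol > threshold

def modeled_relative_risk(total_chol, nadir=180, curvature=0.00005):
    """Toy U-shaped risk curve: relative risk is lowest at the nadir
    and rises for values both below and above it.
    Parameters are hypothetical."""
    return 1.0 + curvature * (total_chol - nadir) ** 2

low, nadir, high = 120, 180, 260
# The knockout rule sees nothing adverse in a very low value...
flag_low = knockout_flag(low)            # False
# ...but the model assigns it higher risk than the nadir value
risk_low = modeled_relative_risk(low)
risk_nadir = modeled_relative_risk(nadir)
```

A single quadratic term is of course a simplification; the point is only that a fitted curve can express the paradoxical low-value risk that a one-sided threshold cannot.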
Underwriters see the potential of multivariable predictive models. However, we must be vigilant to ensure that the techniques are not only accurate and reproducible but also acceptable to our clients and their attending physicians.
In all states, insurers are required to provide explanations for adverse underwriting decisions to the proposed insured on request. It may be debatable whether a policy issued at a preferred rate, but not at the best preferred rate, constitutes an adverse decision; most consumers would consider it one. The insurer must be ready to explain which characteristics of that individual drove the decision.
Today underwriters use easily recognized and clinically accepted parameters to make mortality risk decisions. The explanations are straightforward and understandable. Many underwriters may not be happy using a knockout system in which missing any single criterion moves an individual to the next class, but at least the criteria are generally based upon published clinical data.
With the approach that the insurance laboratories are marketing, lab results would be entered into an algorithm and, based upon age and gender, a risk score would be generated. An underwriter would then use this score, alone or with other criteria, to determine the class for which the individual would qualify. How does a company provide an acceptable explanation based upon generally accepted clinical parameters when it cannot elaborate on the components of the score?
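A minimal sketch of what such a scoring step might look like; the reference table, age banding, and 100-point scale below are invented for illustration and are not the labs' actual algorithm:

```python
def mortality_score(predicted_prob, age, gender, reference):
    """Toy score: ratio of the individual's modeled mortality
    probability to a reference probability for their age/gender cell,
    scaled to 100. Reference table and scaling are hypothetical."""
    baseline = reference[(gender, age // 10 * 10)]  # decade age band
    return round(100 * predicted_prob / baseline)

# Hypothetical reference probabilities by (gender, age band)
ref = {("M", 50): 0.040, ("F", 50): 0.030}

score = mortality_score(0.032, age=55, gender="M", reference=ref)
```

The underwriter sees only the final score, which is exactly the transparency problem: the contribution of any individual lab value or exam finding is buried inside the model and cannot be read off the score itself.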
This issue of transparency concerns applicants, physicians and agents; it also concerns insurance regulators and legislators. Underwriting decisions must be based upon “sound actuarial principles or reasonably anticipated claims experience,” and it is not clear that the mortality scoring approach would meet either of these criteria.
Another concern is assumption of risk. When underwriters and actuaries develop underwriting criteria, they balance risk assumptions and marketing pressures. With the proposed mortality scoring system, the laboratory – whose business is selling lab tests – provides insurers with risk scores that are based on a dataset of applicants, not insured lives. This is a key concern: Who is to say that such models will produce the mortality they predict? Remember, the insurer and reinsurer – not the laboratory – are taking the risk.
Mitigating the Risks
One key step is to provide clarity around the modeling process to the risk-takers: the pricing actuary responsible for profitability, the underwriter explaining adverse decisions to the field, the medical director interacting with clinicians, the corporate counsel defending company policies before regulators/judges, and the reinsurer assuming mortality risk.
Another key step in the process is checking predictions against actual outcomes. It is not surprising that a model performs well when used on its own modeling data. Validation requires predicting outcomes for a separate dataset. Currently this is a weakness in what the labs have presented, since their modeling is based on results from insurance applicants, not insured lives. This new scoring approach must be validated with client data.
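The validation step described above can be sketched as a simple actual-to-expected comparison on hold-out data; the records below are invented for illustration:

```python
def actual_to_expected(records):
    """Compare observed deaths to model-predicted deaths on hold-out
    data. Each record is (predicted_prob, died). An A/E ratio near 1.0
    suggests the model's predictions carry over to the new population;
    a ratio far from 1.0 suggests they do not."""
    expected = sum(prob for prob, _ in records)       # sum of predictions
    actual = sum(1 for _, died in records if died)    # observed deaths
    return actual / expected

# Invented hold-out records: (model-predicted probability, observed death)
holdout = [(0.02, False), (0.10, False), (0.30, True),
           (0.05, False), (0.40, True), (0.15, False)]

ae_ratio = actual_to_expected(holdout)
```

In practice this comparison would be run on an insurer's own insured-lives experience, and broken out by score band, age, and gender, since a model can be well calibrated overall yet miscalibrated within the cells that drive pricing.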
In scientific inquiry, publishing detailed methods, sharing data and replicating results are required before new ideas can be widely accepted. This must be the standard in the life insurance industry if predictive modeling is to be as effective as it has become elsewhere.
Promoting an open debate and validation would result in industry acceptance of predictive models and cement the role of full-blood testing and paramedical examinations in life insurance underwriting for the long term. As a reinsurer, we are committed to the advancement of underwriting practices and procedures. We look forward to working with insurance labs and clients to help validate these models and determine how best to use the associated scoring systems.
While we can look forward to the development of predictive models for life insurance mortality, we must do so with guarded optimism. Transparency of approach and understanding of risk are essential. This requires a logical and disciplined assessment of data and results.
The industry historically has pursued such an approach, using reinsurers and their resources to support these efforts. Unlike the laboratories, we share the risk with the direct insurers, so we stand ready to help our clients make an objective and informed decision as to the validity of the results.