Leading Insurance Talent Joins Atidot’s Executive Team

Martin is a member of the Big Data Task Force of the American Academy of Actuaries and led the development of the industry’s first ULSG priced with Principle-Based Reserves, as well as the conversion of the pricing models to a new platform.

Martin’s CDO appointment complements the founding team of data scientists and actuaries, including the former Chief Actuary at the Israel Ministry of Finance. The expanding team is working with leading insurance providers to enable them to take control of their existing data to strengthen policyholder retention, sales, and in-force management, driving both top-line and bottom-line growth.

The Israel-based startup caters to the unique requirements of its customers and harnesses advanced artificial intelligence, machine learning, and predictive analytics to enable life insurers and annuity writers to make data-driven business decisions. Atidot focuses specifically on the life insurance industry (valued at $597 billion in the US alone), offering insurers an easy-to-use and secure SaaS predictive analytics platform. The company utilizes underused and often neglected sources of data as well as open access information to enhance existing business models.

How exactly will insurers benefit from the Atidot platform? Martin sees it this way: “Life insurers and annuity writers can develop new strategies for their in-force management and new business activities through the insights generated by predictive analytics.”

Martin is a frequent speaker at Society of Actuaries meetings. Most recently, he spoke at the Society of Actuaries Life and Annuity Symposium in Baltimore, Maryland, on May 7 and 8. Martin moderated and presented at Session 36, "The Risk Management Process in Product Development," and led a workshop at Session 81, "How AI Is Being Used in Distribution, Product Development, Pricing, and Underwriting."

Keep an eye out for future Atidot executive spottings – coming soon!

Analysis of Statutory Annual Statements (Part 1)

At Atidot we put a lot of effort into collecting as much data as we can in order to improve our modelling and understanding of the life insurance industry. Our Data Scientists love this approach – it enables us to use hard numbers to support our analyses. One data source that we've been wanting to examine is US life insurers' Statutory Annual Statements. This blog post summarizes a quick research project we just completed to extract and analyze data from these Statements.


What's in a Statutory Annual Statement?

The Statutory Annual Statement contains a wealth of financial and insurance information about an insurer, including, for example, Premiums collected, Reserves, Cash Flows, Reinsurance, etc. From a data perspective, we like to compare it to a thorough "financial blood-test", measuring the vitals and health of a company, shedding light on how it operates, and in some cases – why it takes certain actions. For us, this is invaluable – data like this strengthens the calibration of our algorithms, a key step in our journey to further develop the sophisticated Atidot brain that understands and interprets the life insurance industry.

A Google search for statutory annual statements yields some links with downloadable PDF reports, for example:

[Figure: example page from a report]


PDF Hell

PDFs are great for transmitting documents and other information electronically. But converting a PDF into a format in which you can actually use the numbers is difficult, to say the least. Our first step was extracting all the tables we needed, for all companies and all years, from document files (.pdf) into tabular (.csv) files, keeping every number in the right order.
This proved to be extremely challenging. We played with the idea of doing it manually, but abandoned that idea pretty quickly and realized we needed to develop a fully automatic solution.
One challenge we faced was that in most reports, numbers were encoded with inline PDF custom fonts, and the standard tools of the trade (e.g., pdftotext) couldn't handle that directly.
After designing several candidate solutions, we realized we needed a general solution for extracting tables of numbers from PDFs and images. This is when we added image processing and OCR (Optical Character Recognition) to the mix.

We combined several powerful libraries and tools to build an analytics pipeline that: a) cleans the image of small artifacts and noise; b) identifies table cells, rows, and columns; and c) runs OCR on each cell.

Online example (using PDF.js + OpenCV.js)
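For readers who want a feel for how such a pipeline fits together, here is a minimal sketch of the three steps in Python, using OpenCV and pytesseract. It is illustrative only, not our production code; it assumes the PDF page has already been rendered to an image (for example with pdf2image), and the kernel sizes and thresholds are placeholder values.

```python
# A minimal sketch of the three-step pipeline above: (a) denoise,
# (b) locate table cells via the ruling lines, (c) OCR each cell.
# Illustrative only; assumes the page was already rendered to an image.
import cv2
import pytesseract


def extract_table(image_path):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

    # (a) Remove small artifacts and noise, then binarize (text/lines = white).
    img = cv2.medianBlur(img, 3)
    _, binary = cv2.threshold(img, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # (b) Recover the table grid: morphological opening with long, thin
    # kernels keeps only horizontal / vertical ruling lines.
    h_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (40, 1))
    v_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 40))
    grid = cv2.bitwise_or(
        cv2.morphologyEx(binary, cv2.MORPH_OPEN, h_kernel),
        cv2.morphologyEx(binary, cv2.MORPH_OPEN, v_kernel))

    # Every enclosed region of the grid is a candidate cell.
    contours, _ = cv2.findContours(grid, cv2.RETR_TREE,
                                   cv2.CHAIN_APPROX_SIMPLE)
    cells = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w < 20 or h < 10 or w > 0.9 * img.shape[1]:
            continue  # skip noise-sized boxes and the outer table frame
        # (c) OCR the cell crop; PSM 7 = treat it as a single text line.
        text = pytesseract.image_to_string(
            img[y:y + h, x:x + w], config="--psm 7").strip()
        cells.append((y, x, text))

    # Group cells into rows by their top edge (within a 10px tolerance).
    rows, last_y = [], None
    for y, x, text in sorted(cells):
        if last_y is None or y - last_y > 10:
            rows.append([])
            last_y = y
        rows[-1].append(text)
    return rows
```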


Advertising Efficacy

With the time we had left, we decided to do a quick study of the effects of Advertising on "New Business". Measuring the effectiveness of Advertising is tricky, and there are numerous ways to define it, let alone to judge whether any given definition is good enough. Accepting that there are no silver bullets here, we developed a working definition for this blog post that:

  • Is easy enough to understand in this context
  • Uses data from several tables
  • Incorporates business acumen (e.g. "real value" of First Year premiums)
  • Takes the lagging effects of Advertising into account
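As a rough illustration of how such a definition can be computed from the extracted tables, here is a small pandas sketch. The value multiple and lag weights are made-up placeholders, not the exact figures behind our charts, and the table layout is assumed.

```python
# Illustrative efficacy ratio, assuming two extracted tables as pandas
# Series indexed by (company, year). The "real value" multiple and the
# lag weights are placeholder assumptions, not our actual parameters.
import pandas as pd

VALUE_FACTOR = 10               # assumed "real value" multiple of a First Year premium
LAG_WEIGHTS = {0: 0.6, 1: 0.4}  # assumed spread of Advertising's effect over two years

def efficacy(advertising: pd.Series, first_year_premium: pd.Series) -> pd.Series:
    """Advertising spend per dollar of lag-adjusted First Year premium value."""
    value = first_year_premium * VALUE_FACTOR
    # Spread each year's Advertising over the current and the following year.
    lagged_ad = sum(w * advertising.groupby(level="company").shift(k)
                    for k, w in LAG_WEIGHTS.items())
    return (lagged_ad / value).rename("ad_efficacy")
```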

Here are the results, illustrated with the relevant tables.

The following chart shows the original values from the Statements (item 5.2, Advertising expenses – Life) for several companies during the years 2015-2017.


This chart shows our calculated ratio. Notably, Colonial Penn Life Insurance Company stands out for consistently spending as much on Advertising as the "value" of its First Year premiums (per our normalized definition above).


Conclusion

We set out to analyze Statements with modern Data Science tools, but as we only had 4 weeks for this work, it became clear very early on that we should revise our objective and first improve our PDF data-extraction capabilities. We're very happy with the results: we're now able to extract virtually any table from a PDF or image.
Of course, we haven't forgotten our initial goal – we still care about the data and what it says. In a future post we are going to test some advanced Machine Learning techniques (such as deep neural networks) and share the insights we develop.
If you find this interesting and want to learn more, please contact us at: info@atidot.com

Transform Your Business With Predictive Analytics

Predictive analytics and artificial intelligence (AI) are the most transformative developments in the history of life and annuity products, with early adopters poised to achieve major strategic advantages.

Predictive analytics can enable us to better understand, in real time, the complex causal relationships that affect the performance of our business, thereby enabling exponential strategic advantages. In other words, we have the predictive insights in time to act on them, enabling the business to be proactive rather than reactive.

Personalization in the life insurance industry

I own a life insurance policy from one of the largest life insurers. The closest they have come to recommending a product is to send a list of all the products they offer and suggest that I spend time discussing my needs with an agent.

I also own automobile and homeowners insurance from one of the largest P&C writers. Every few years, they recommend that I buy $100,000 of life insurance. But this is not personalized! In the 21st century, people expect smart services. The likes of Amazon and Google have learned to maximize engagement with their platforms by interacting with their user bases dynamically. What will it take for our industry to catch up to those born in the digital age?

Perhaps experts have studied the issue and determined that additional predictive analytics is unable to help our industry. However, I believe this is simply not true.

Suppose that Company A has poor lapse experience and wants to determine what it can do to improve its persistence. They can call everyone who lapses, find out their issues, and try to convince them to reinstate. At best, this would be post-hoc and expensive.

They could call in-force policyholders instead before lapsation happens, but it would be hit or miss on whether they are calling customers at risk, and hence an expensive proposition with dubious results. Worse yet, some policyholders who otherwise would not have lapsed may get the idea from these calls to lapse their policies. So, how can predictive analytics help Company A improve its retention?

Knowledge is power

Predictive analytics – without human intervention – has demonstrated that some data, such as the premium payment date – previously thought of by many as important only for administrative purposes – can be significant predictors of lapsation risk.

Lower and middle-income customers who pay their premiums shortly after they receive their paycheck when they have sufficient funds in their checking account are more likely to keep their policies in force. Those customers whose premium due dates fall a long time after they receive their paychecks – by which time they may have spent their most recent paycheck – are more likely to lapse.
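To make this concrete, here is a toy sketch of how the payday effect could be tested. The column names, the semi-monthly payday assumption (the 1st and the 15th), and the handful of labels are all hypothetical.

```python
# Toy illustration of the premium-payment-date effect described above.
# The schema and the 1st/15th payday assumption are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def days_since_payday(due_day: pd.Series) -> np.ndarray:
    """Days from the nearest preceding semi-monthly payday (1st or 15th)."""
    return np.where(due_day >= 15, due_day - 15, due_day - 1)

policies = pd.DataFrame({
    "premium_due_day": [2, 14, 16, 28, 5, 27],
    "lapsed": [0, 1, 0, 1, 0, 1],  # toy labels, not real experience
})
X = days_since_payday(policies["premium_due_day"]).reshape(-1, 1)
model = LogisticRegression().fit(X, policies["lapsed"])
# A positive coefficient supports the hypothesis: the longer after payday
# the premium falls due, the higher the lapse risk.
print(model.coef_)
```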

Armed with this knowledge and other discoveries generated by predictive analytics, insurers and producers can know which policyholders to call and when as well as why these customers are at high risk.

We produce more refined, newly identified policyholder segments, and we use more data and extended study periods to set credible lapse rate assumptions with lower variability.

The lapse assumptions are more accurate than those produced previously, and financial models and results have lower variability. This also provides a launchpad for reducing lapses in the future. Whether this support yields strong incremental effects or exponential strategic advantages depends on the insurer's implementation.

Using automation to gain exponential strategic advantages 

Automating predictive analytics would enable quick analysis of additional potentially predictive factors that arise as well as real-time information on the impact of behavioral, economic, market and other environmental changes. The insurer can then be proactive in improving policyholder retention and understanding its emerging lapse experience, providing exponential strategic advantages.

Returning to the sales process, let us think about how much valuable information we collect that we do not use. For example, when a policyholder notifies us of a change in address, do we treat it purely as an administrative matter, or do we analyze it to see whether the move suggests changed economic or family circumstances and hence an increased need for coverage?

Given that many people simply do not buy what a needs analysis says they should buy, perhaps we can start by letting people know how much coverage others in similar circumstances have. This may not solve the entire gap in life insurance coverage, but it is a message that resonates with customers (as Amazon has demonstrated), and it would be a door opener for us to talk to customers and prospects about their needs.

We, in the life and annuity spaces, have built our businesses by collecting and effectively analyzing huge volumes of data. Let us continue to innovate and use the new tools now available to us to revitalize – and indeed revolutionize – our businesses!

Next Gen Personalization for Life Insurance

In this world of instant communication, consumers have come to expect personalized user experiences from their service providers. Recognizing this, most industries can now offer their customers products that match their immediate and long-term needs, wrapped in tailored messaging that speaks their language and caters to their lifestyle, behavior, attitude, and preferences.

This is the basis for the data-driven, ‘People Like You’ marketing strategy commonly used in B2C campaigns.

There is so much untapped potential for personalization in the life insurance world. Most life insurers currently use traditional segmentation tools such as Tapestry, Mosaic, and even Facebook as the basis for personalizing their marketing activities in the life insurance vertical.

In short, segmentation programs classify people into over 60 groups and types based mainly on zip code data, creating unique lifestyle segments relating to demographics and socioeconomic characteristics. Tapestry, for example, describes US neighborhoods in easy-to-visualize terms, ranging from “American Royalty” to “Heartland Communities.”

But what if you could add a totally new dimension to traditional classification? Tracking recurring behavioral patterns can create hundreds of thousands of additional granular segmentations, providing a full and complete insight into your Book of Business!

Atidot leverages these segments with its machine learning capabilities. We use external information from public databases as well as internal sources such as CRM systems.

Additionally, you can create a platform for new marketing strategies that are more accurate, enabling marketing campaigns based on real-time customer data. Occupation, the proximity of hospitals, the day of premium payment, investment patterns, and more can impact the Lifetime Value of your policyholders and can help create additional revenue sources.

This is the next-generation platform for product personalization and tailored marketing campaigns, new risk modeling, lapse strategy, and more. Moreover, machine learning technology can keep learning as policyholders’ actions are recorded to create more accurate and additional profiles, thus detecting the most profitable potential customers.

Real-time data is the basis of our Nano-Segmentation Methodology. One example of a behavioral pattern is the client’s payment date. Different dates have different meanings: if someone is repeatedly late in their premium payments, the machine can identify a pattern such as “these people, if their age is between 40-50, tend to lapse within 5 years”.

This behavioral pattern could have a totally different meaning if they are between the ages of 20-30. In that case, it might suggest that they are busy, successful people who don’t have enough time to attend to their finances.
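A toy sketch of this kind of nano-segmentation: cross a payment-behavior flag with age bands and measure lapse rates per micro-segment. The column names, thresholds, and miniature data set are hypothetical.

```python
# Toy nano-segmentation: payment behavior x age band -> 5-year lapse rate.
# All column names and values are hypothetical placeholders.
import pandas as pd

df = pd.DataFrame({
    "avg_payment_delay_days": [12, 1, 15, 0, 20, 2],
    "age": [45, 23, 48, 27, 44, 52],
    "lapsed_within_5y": [1, 0, 1, 0, 1, 0],
})
df["late_payer"] = df["avg_payment_delay_days"] > 7  # assumed cutoff
df["age_band"] = pd.cut(df["age"], bins=[20, 30, 40, 50, 60],
                        labels=["20-30", "30-40", "40-50", "50-60"])
segment_lapse = (df.groupby(["late_payer", "age_band"], observed=True)
                   ["lapsed_within_5y"].mean())
print(segment_lapse)  # e.g., late payers aged 40-50 show the highest rate
```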

Another example of a Behavioral Pattern relates to sensitivity to financial market trends. For instance, when the Bond Index is on the rise, some people tend to invest more in their own pension funds.

Atidot’s technology ties such behavior patterns into different groups, such as the “Trendsetters” segment that tends to invest when the Bond Index is up, indicating that they have a financial orientation and can be treated in two different ways. For instance, if the client owns a policy with a 4% guaranteed premium that was issued years ago, they should be encouraged not to lapse that policy.

On a strategic level, if you are catering to this specific group, you might launch marketing campaigns in channels that reach these financially oriented trendsetters, for instance, via bloggers associated with style, targeted ads in fashion publications, direct email, etc. The possibilities are endless.

So, since AI and ML capabilities can be trained to translate real-time events into real-time data, this newfound segmentation becomes the platform for tailored marketing targeting that can factor in any relevant real-time data. For instance, changes in global food and oil security can impact the demand for life insurance, generating real-time targeting on a weekly or monthly basis through traditional channels such as email campaigns, social networks, etc.

This next-generation approach lays the foundation for leading your company into the challenges of the next era.

Life Insurance Providers are Missing Opportunities Presented by Raw and Unstructured Data

“Water, water, everywhere, nor any drop to drink.” Replace Coleridge’s memorable line with “data” and you have an accurate summation of the life insurance industry over the last three decades. Since the trade’s beginnings in 1700s London and even, some suggest, ancient Rome, life insurance companies have accrued vast sums of data on policyholders’ health, family circumstances, living arrangements, employment, and beneficiaries. The sector might be considered the original “Big Data” business but, until now, unlocking the full potential of that data was a task befitting Hercules.

Traditionally, life insurance companies store their data in different formats and in different systems, none of them compatible, none of them talking to each other. But the old legacy systems of green and orange screens do not provide a means of strategically using data and insurers are subsequently losing out in an increasingly competitive market.

Advances in artificial intelligence, machine learning, and predictive analytics have opened a new world of opportunity for life insurance companies. Digitalization in the 1990s created an explosion of available data, but for a long time, this surge was not matched by corresponding technological developments that allowed the data to be processed, manipulated, and transformed into actionable insights.

One of the greatest challenges currently facing actuaries in the life insurance industry is that while senior management is eager to put the abundance of data to practical gain—to develop a better understanding of policyholders and apply this in marketing, pricing and reserving—actuaries are often not equipped with the know-how to apply the latest technologies to their own data.

Many companies that work with brokers struggle to form a full picture of their clients. Even those that work directly with clients often fail to accurately predict future behavior, because they are unable to unlock the data’s hidden value with the range of tools available to them.

Life insurers are realizing that outmoded ways of doing business are not only sub-optimal but may even no longer be viable. For decades, the sector was slow to adapt to new technologies that other industries were responding to, and entrenched IT departments coupled with insufficient pressure to adapt were the enemies of innovation. Today, insurers are working in a very different climate. In a 2016 PwC survey, some three-quarters of insurance companies acknowledged that their business was going to be affected by technology disruption and feared that their traditional operations might lose to new contenders. A similar percentage of insurers surveyed in 2015 said that they expected to use big data in pricing, underwriting, and risk selection within two years. Competition is greater, premiums are lower and industry disruptors such as Lemonade are trying to upend the industry from a distribution channel point of view.

To address this, many insurers are looking to data scientists to extract value from their data. But most data teams at insurtech companies expect to receive normalized data from the insurers to create a structured data set. Not only is this frequently beyond the reach of smaller companies that don’t count data scientists among their staff and therefore cannot devote the necessary time and resources to unpack the data; even among those firms big enough to have their own innovation departments, the time and money required to cleanse and normalize the data can be burdensome.

Some companies are changing this reality by harnessing new data-analysis tools that enable life insurance companies to monetize their in-force data and customer base. These platforms allow insurers to assess the potential for under-insurance, high lapse risk, and profitability; to improve up-sell and cross-sell efforts; and to optimize distribution channels to develop proactive retention programs. By taking advantage of artificial intelligence, machine learning, and predictive analytics, these systems augment the data, both internal and public, held by life insurance providers, grouping cohorts of policyholders together for meaningful analysis to find the embedded value in a book of business.

Despite this potential value, the life insurance market, which has not yet made the technological jumps that have revolutionized other sectors such as banking and finance, has been slow to appreciate the value of raw and unstructured data.

Structuring unstructured data is a big headache for insurers, yet it is also a necessity if companies have only standard actuarial techniques at their disposal. With advanced methods of artificial intelligence and machine learning, the data can bypass these first steps of insurer manipulation, allowing the modeling process to start straight away, a process that is immediately quicker and more efficient.

Typically, after data has been normalized, unstructured data is left out of the final analysis, wiping out vast quantities of relevant information. In many cases, the very lack of data is itself indicative of a behavioral pattern. Raw, unmanipulated data grants insurers the freedom to utilize more features, and the more features, the better insurers can understand why individuals make the choices they do, helping them to build up a more realistic image of their behavior. From a quantitative point of view, the model is improved by ever larger data sets.

The benefits of unstructured data can be illustrated through the example of a free text box that may accompany an insurer’s request for policyholders’ work emails and occupations. With free text cells, swaths of data can be lost unless insurers understand how to analyze them and link them to external information sources. For example, the data can be divided into three sets: those who enter an accurate work email, those who enter an email that is incorrect, and those who leave the space blank. Through advanced modeling, it can be ascertained that each of these groups behaves in distinct ways. From this data—and even, significantly, from the absence of data—insurers gain greater insights into the policyholder. Those who put in incorrect emails may have lost their jobs, for example, and those who don’t enter a work email may either be unemployed or employed in a field—such as construction or cleaning—where they don’t require an email.

Similarly, if people are asked to enter their occupations manually, there will be tens of thousands of variations—teacher, math teacher, French tutor—that are not statistically significant until the techniques of machine learning are applied: different occupations can then be clustered according to different statistical groups, such as pensioners, teachers, managers, housewives, to extract potentially lucrative data.
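As a rough sketch of how such clustering might work in practice, the snippet below groups free-text occupations with TF-IDF character n-grams and k-means. The occupation list and all parameters are illustrative placeholders, not a production model.

```python
# Sketch: cluster free-text occupations so that spelling variants such as
# "teacher" / "math teacher" / "school teacher" land near each other.
# Parameters and data are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

occupations = ["teacher", "math teacher", "French tutor", "school teacher",
               "manager", "sales manager", "retired", "pensioner",
               "housewife", "homemaker"]
# Character n-grams are robust to word-order and spelling variation.
vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 4))
X = vec.fit_transform(occupations)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
for occ, lab in sorted(zip(occupations, labels), key=lambda t: t[1]):
    print(lab, occ)
```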

Knowing where a policyholder pays their premiums from, whether from an individual or company account or, for example, the Teachers Federal Credit Union or the Navy Federal Credit Union, is advantageous for the insurer.

Unlocking the structure within unstructured data is the key to further insights, which are enriched by publicly available external data. Advanced models can apply those features most relevant to insurance, for example, U.S. census questions on monthly insurance expenditure and assets in pension savings. Moreover, every time data is entered into these machine-learning models, the process is quicker than before, giving insurers greater insights in a significantly shorter space of time.

Raw data is the key to insurers staying ahead of the competition. Life insurers can continue to do what they do best—but now with the tools to irrigate their data and watch the profits bloom.

Enabling Change in Life Insurance: what does it “tech” to get the job done?

As published in InsureTechNews: https://insurtechnews.com/insights/enabling-change-in-life-insurance-what-does-it-tech-to-get-the-job-done

Written by Dustin Yoder, CEO, Sureify; Dror Katzav, CEO, Atidot; Brent Williams, CEO, Benekiva; and Andrei Pop, CEO, Human API

The COVID-19 pandemic has turned up the heat on insurers. Now, more than ever before, they need to provide their business units with digital tools to conduct business, keep agents selling, and enable connections to customers in a socially distanced world. The new business and social climate mandates that customers be able to shop for, buy, and manage a policy without a face-to-face interaction. Those carriers that have put the technology and processes in place to meet this increased business and customer expectation will have continued success even while the model of what “work” and “financial security” look like is changing. Those who haven’t yet modernized are under added pressure to up their digital game.

Looking at a cross-section of insurtech software and data leaders can provide carriers with a great deal of wisdom on how digital enablement is creating a more robust, more accurate, more cost-effective life insurance model for those willing to mix old and new. The leaders cited here demonstrate how implementing across-the-board technology enhancements can ultimately produce the recipe for success, not just today, but for the long term.
SUREIFY®

A recent Celent study shows that, in 2019, many of the most basic customer interactions related to the sale and service of life insurance could not be met digitally. More than 50% of insurers were unable to satisfy even basic customer needs (like changing a name, email address or beneficiary) without a face-to-face or fax transaction. It took a global pandemic, with many essential processes stopped in their tracks, to wake the industry up. Today, many carriers who believed time was on their side are scrambling to adapt to this new environment, which arrived overnight and shows no signs of abatement. Frenetic activity is now being undertaken to plug a gap that was apparent even before the virus hit. A permanent digital end-to-end solution is no longer a nicety – it is a necessity.

Sureify began offering the “innovative” end-to-end digital experience that is now an absolute requirement for the life insurance industry long before the virus changed the transactional environment. The company’s platform is built to be flexible and modular.

LifetimeAcquire enables omnichannel sales that drive placement rates via quoting, e-application, and automated underwriting.

As different insurtech partners are brought in to fill specific roles for different clients, Sureify completely customizes offerings to fit individual carriers’ needs, even in a constantly-shifting business landscape. With a full complement of like-minded groundbreakers, Sureify orchestrates better, more cost-effective products and processes. The company’s continued growth is further proof that digital transformation is no longer an option, but a necessity. “The idea that digital transformation is innovative has changed overnight, and now many of these digital capabilities are the new normal for doing business post-COVID” noted Sureify CEO, Dustin Yoder. “Life insurers who make digital transformation part of their core strategy in 2020 will be well-positioned, not just throughout this pandemic, but into the future.”

HUMAN API

Traditional methods of gathering needed information for underwriting such as in-person paramedical exams and attending physician statement requests are significantly delayed or paused for the moment as the medical field turns its attention to the coronavirus outbreak. As a result, carriers and reinsurers must find a digital method for collecting medical data to continue underwriting cases. Distribution firms are also on the lookout for new ways to assist clients remotely, and to streamline the insurance buying experience to help more consumers secure peace of mind.

Human API allows consumers to digitally connect and share health data from the comfort of their homes — with no IT integration work to get up and running. This “no-touch” approach to medical data retrieval supports business continuity for carriers, reinsurers, and distribution firms while ushering in a new era of digital transformation in insurance. Carriers and reinsurers are finding that an applicant’s electronic medical records often contain valuable information such as recent lab tests, vitals, and social history that can be used to expedite underwriting. The use of EHR data has grown exponentially, especially in recent weeks, and it shows the potential to replace the APS, in-person exams and lab work. This could pave the road to a future with automated rules engines, accelerated underwriting programs, and granular risk stratification. Stakeholders from all across the life insurance industry are fully embracing EHR data to adapt to our new normal and better serve consumers.

ATIDOT

Life insurers and annuity writers have more client data than any other industry, yet analyzing policyholder behavior has always posed a challenge. The recent rise in unemployment rates, regulatory changes and the unprecedented market volatility have exacerbated that challenge. Over the course of the past six months, we have been confronted with regulatory changes, the COVID-19 pandemic, Shelter-In-Place orders, a mandatory 90-day premium grace period, interest-rate drops, and a two-trillion-dollar relief package. Traditional capabilities and paradigms lack the flexibility to enable insurers to overcome uncertainty, limiting their ability to understand demand elasticity and analyze new market trends in real time.

Choosing a digital, data-driven approach will empower life insurers to embrace new opportunities and overcome unforeseen risks. Life insurers need the ability to analyze parameters such as lapse, mortality, and profitability (and many more) to generate accurate real-time predictions that will support their strategy. The fastest, most efficient, and most accurate way to generate such insights and predictions is to use Artificial Intelligence and Machine Learning technologies. AI and ML can process large amounts of data from multiple internal and external data sources and then learn and produce new insights as events occur. Atidot offers such real-time data analysis based on clients’ portfolios. The solution provides a 360-degree view of policyholders and producers through insights and recommendations. It also allows strategic scenario modeling on individual policyholders or on an insurer’s overall portfolio, and prediction of trends, looking at both individual policyholder behavior and market changes in real time. This allows carriers to monitor, analyze and strategize to improve profitability immediately. “The pandemic has accelerated the digitization process within life carriers; however, they have yet to maximize the potential within their data,” says Dror Katzav, Atidot CEO.

BENEKIVA

In today’s more hectic than usual environment, brought on by COVID-19, companies are challenged to keep business continuity effortless. Carriers need to easily move their claims operations to employees working from home, with no drop in productivity or issues with claims processing. Even the best life insurance product, modernized by technology and data, means nothing if it does not result in the fulfillment of a customer claim.

Benekiva’s Bene-Claims module has allowed forward-thinking clients to transform their claims processes from intake through payout. The platform automates documentation, intake process, correspondence, workflows, rules, reporting, interest calculations and more. The company’s flexible architecture allows the platform to connect with multiple carriers’ systems across an organization to offer a single claims platform regardless of product line, product riders/rules, or underlying company that the policy was written under. Carriers using Benekiva’s claims module experienced business as usual, or even better than usual. Per Steve Shaffer, Chairman, President, and CEO of Homesteaders Life Company, “With Benekiva’s ability to work anywhere, anytime, and any device, during COVID-19, it has been business as usual for our claims staff and most importantly, we have been able to uphold our superior servicing standard to our beneficiaries.”

Insurers working with Benekiva have reported the following benefits:
– A 40% gain in operational efficiency across claims processing, workflows, and payout
– Accurate rider and benefit calculations that saved a carrier $2 to $4 million a year
– Optimized interest calculations that saved a carrier over 40 hours a week
– A 75% reduction in cycle time

As the industry goes forward, the increased sense of urgency that has resulted from COVID-19 is unlikely to diminish. That means that digital transformation will be essential to any carrier hoping to be viable in this new domain. As the method of doing business transforms to an “all digital” experience, it should be a given that 100% of insurers will be able to complete virtually all business transactions through mobile and web-based applications. Those insurers, and their start-up partners, who come to the table early will be best situated to sell more, manage risk better and operate in a cost-effective manner well into the future. Those who are in the beginning stages of offering such an experience may still be able to make up ground as they accelerate their processes to fit into today’s exclusively-remote business world. Those who haven’t yet started a digital transformation will need the best allies in insurtech to survive into the future.

Better With Age / The Actuary Magazine

As featured in The Actuary Magazine: https://theactuarymagazine.org/better-with-age/


Better With Age

Predicting mortality for post-level term insurance

Martin Snow and Adam Haber | Spring 2020


Actuaries have a long and storied history of providing the joint mathematical and business foundation for the insurance industry. Yet, advanced predictive analytics techniques with machine learning (ML) and artificial intelligence (AI) have not made it into the standard toolkit of the typical actuary. Insurers and actuaries could reap major strategic benefits if they were to significantly increase their use of these advanced predictive techniques. In this article, we focus on mortality and lapse studies as one example.

Post-level term (PLT) insurance presents a unique set of challenges when it comes to predicting mortality and lapse experience. After a set period of, say, 10 or 20 years during which the policyowner paid level premiums, the premium rises annually. Customers will be highly motivated to identify all of their other options. Healthier individuals will have good alternatives and lapse their policies; the less healthy ones will remain. The greater the premium increase, the greater this effect will be—resulting in the classic mortality spiral.

How can we get a good quantification of the interrelationship between premium increases and lapse and mortality experience? By building a predictive analytics model—more advanced than those previously developed1,2—to set lapse and mortality assumptions, and price and value PLT insurance. Our model will statistically integrate heterogeneous customer cohorts,3 improve credibility in cohorts with sparse claims data, and provide a more complete understanding of the impact of premium changes on mortality rates. We can only imagine the additional improvements to insurer pricing and financial reporting that could be achieved with broader applicability of these techniques beyond PLT.

OUR PLT MODEL

Our PLT model comprises three advanced predictive methods:

1. An innovative application of a statistical multivariate framework to model PLT lapse and mortality. This multivariate model reflects the causal structure (and almost immediate impact) of PLT lapsation and premium changes on mortality (PLT causal structure4) and provides better guidance for setting PLT premiums. Taking the causal structure into consideration is especially important when answering predictive “what if” questions (e.g., what happens to mortality if we change premiums by X percent).

Consistent with our plan to model the lapse rate as a major driver of the dependence of mortality rates on premium level, we make assumptions in our model about the underlying data-generating processes:
· Whether a policyholder lapses at the end of the level term period is a stochastic function of various characteristics such as age, gender, risk class, face amount and the change in premium.
· This function may include complex dependencies among variables. For example, the effect of different face amounts on lapsation may vary by age, gender and so on.
· The differences in both base and shock lapse among cohorts cause perceptible differences in mortality levels.

2. The statistical technique of “partial pooling” to increase the credibility of sparsely populated cohorts. This is especially important when the volume of available data (especially mortality data) differs substantially by cohort, leading to differences in credibility—including cohorts with very limited credibility.

Partial pooling is a principled middle ground between complete pooling, which fits a single model for the entire population and ignores variations, and no pooling, which fits a single model for each cohort and ignores similarities shared among cohorts. Partial pooling is also known as hierarchical partial pooling.

Partial pooling enables us to share information (borrowing strength) among cohorts, regularize6 our model and account for different cohort sizes without incorporating ad hoc solutions. The data for each observed cohort informs and adds credibility to the probability estimates for all of the other cohorts. The extreme estimates are driven toward the population mean (“shrinkage” in Bayesian statistics) with significant lessening of variability that may have been created by noise in the data. This phenomenon is closely related to the concept of bias-variance trade-off,7 in which the tightness of fit to the observed data is reduced, so the derived estimates serve as better predictors. Partial pooling leaves us with better estimates, reduced variability and improved credibility.
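The shrinkage just described can be made concrete with the textbook partial-pooling estimate for a normal hierarchical model; this is a standard illustration of the mechanism, not the exact PLT model:

```latex
\hat{\theta}_j = \lambda_j \,\bar{y}_j + (1-\lambda_j)\,\hat{\mu},
\qquad
\lambda_j = \frac{n_j/\sigma^2}{\,n_j/\sigma^2 + 1/\tau^2\,}
```

Here \(\bar{y}_j\) is cohort j's raw estimate, \(\hat{\mu}\) the population mean, \(n_j\) the cohort's exposure, and \(\sigma^2\) and \(\tau^2\) the within- and between-cohort variances. As \(n_j\) shrinks, \(\lambda_j\) approaches 0 and the cohort estimate is pulled toward the population mean, exactly the behavior described above.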

Partial pooling smooths mortality estimates, which by itself is not new in actuarial science—different graduation techniques have been developed and implemented over the years. The distinct advantage of partial pooling is that it achieves the same goal by explicitly sharing information among cohorts in a principled way (guided by domain knowledge and analysis of the data), and it can improve credibility in sparsely populated cohorts.

3. The integrative statistical approach of Bayesian inference 8,9 to quantify differences in experience among cohorts with different exposure levels. The generative nature10 of Bayesian modeling enables the incorporation of expert knowledge into the models in the form of model structure and informed priors.11,12 Bayesian models produce crucial uncertainty estimates (unlike the point estimates supplied by more traditional maximum likelihood approaches) needed for informed decision-making—especially with sparse mortality data. We use Bayesian multivariate modeling of lapse and mortality, but we do not include a numerical comparison of the Bayesian and non-Bayesian approaches in this article due to space considerations.

There are two key elements of our mortality-lapse model. The first is a nonlinear regression lapse model inspired by previous Society of Actuaries (SOA) studies.13,14 We added partial pooling of parameters across cohorts to increase accuracy, credibility and predictability. We changed the link function of the model from log to logit to ensure per-cohort lapsation is bounded by the exposure (previously it was possible for the model to predict more lapses than exposures, i.e., an actual-to-expected ratio > 1).
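For intuition, here is a minimal sketch of such a partially pooled, logit-link lapse model in PyMC. The article's actual implementation is in Stan (see below), and the data arrays here are synthetic placeholders rather than the SOA experience data.

```python
# Minimal PyMC sketch of a partially pooled lapse model with a logit link,
# in the spirit of the model described above. Data is synthetic.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_cohorts = 50
cohort_idx = np.repeat(np.arange(n_cohorts), 4)             # 4 obs per cohort
exposure = rng.integers(20, 2000, size=cohort_idx.size)
premium_jump = rng.uniform(1.5, 6.0, size=cohort_idx.size)  # PLT premium multiple
lapses = rng.binomial(exposure, 0.5)                        # placeholder outcomes

with pm.Model() as lapse_model:
    # Hyperpriors shared across cohorts -- this is the partial pooling.
    mu = pm.Normal("mu", 0.0, 1.5)
    tau = pm.HalfNormal("tau", 1.0)
    # Per-cohort intercepts drawn from the shared distribution, so sparse
    # cohorts are shrunk toward the population mean.
    alpha = pm.Normal("alpha", mu, tau, shape=n_cohorts)
    beta = pm.Normal("beta", 0.0, 1.0)  # effect of the premium increase

    # Logit link + binomial likelihood keep predicted lapses bounded by
    # the exposure, as described above.
    p = pm.math.invlogit(alpha[cohort_idx] + beta * premium_jump)
    pm.Binomial("lapses", n=exposure, p=p, observed=lapses)

    idata = pm.sample(1000, tune=1000, target_accept=0.9)
```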

The second key element of our model is that it is a Bayesian version of the Dukes MacDonald (DM) mortality model.15,16 In this version, we model the effectiveness parameter as a nonlinear function of the cohort characteristics (e.g., age, risk class, gender, etc.), use priors that reflect actuarial knowledge regarding plausible parameter values of G (e.g., a reasonable prior might put more weight on values of G closer to 1 than 0),17 and infer the posterior distribution of G from the data (the distributions over model parameters after conditioning on the data). We use the nonlinear regression lapse model previously described to estimate a distribution of lapse rates by cohort. Mortality is estimated by integrating over two variables: the joint distribution of base/shock lapse rates and the effectiveness parameter, thereby completing the mortality-lapse model.

OUR MODEL IN ACTION

To implement the model, parameters for both the lapse and mortality models were estimated using Stan, a state-of-the-art platform for statistical modeling and high-performance statistical computation.18 We validated the estimates Stan provided with both Bayesian model comparison methods, such as leave-one-out (LOO) and Watanabe–Akaike information criterion (WAIC),19 and actual-to-expected (A/E) ratios.

The SOA data20 we used for our modeling, consisting of 8,066 different customer cohorts, is summarized in Figure 1.

Figure 1: Experience Used in the Model

Source: Society of Actuaries. Lapse and Mortality Experience of Post-Level Premium Period Term Plans. Society of Actuaries, 2009 (accessed January 27, 2020).
To quantify and validate the impact of the new Bayesian tools presented, we conducted an analysis. First, for the multivariate modeling of lapse and mortality, we examined three variants of DM mortality estimates:

1. Assume fixed base lapse rates before the PLT period, fixed total lapse rates at the end of the level term period, and fixed effectiveness parameters. Optimal values for base and total lapse rates and the effectiveness parameter were found by using a standard gradient descent optimization algorithm. The lapse and effectiveness parameters do not vary by cohort though the select and point-in-scale mortality do vary by cohort.

2. Empirically assess from the data both the base and total lapse rates by cohort. The effectiveness parameter was fixed. It was optimized using grid search.21

3. Use a partially pooled model to estimate both base and total lapse rates that vary by cohort.

The distribution of the effectiveness parameter was inferred from the data itself using NUTS,22 an adaptive extension of the Hamiltonian Monte Carlo Markov Chain algorithm.23 In each of these variants, expected mortality is computed based on the five input parameters to DM: effectiveness, base lapsation, shock lapsation, select mortality and point-in-scale mortality. The select and point-in-scale mortality used in the computation of expected mortality were selected from standard tables. We compared the actual deaths for each method in each cohort to the expected, and we then computed a weighted error as the mean absolute deviation of the predicted A/E ratio from an A/E ratio of 1, weighted by exposure. Figure 2 shows the results.24
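Written out, with \(E_c\) denoting cohort c's exposure and \((A/E)_c\) its actual-to-expected ratio, the weighted error just described is:

```latex
\text{weighted error} = \frac{\sum_{c} E_c \,\bigl| (A/E)_c - 1 \bigr|}{\sum_{c} E_c}
```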

Figure 2: Mean Absolute Deviation of Actual/Expected Ratios

A model such as this can be continually improved. For example, we know mortality is often a bit higher for lower socioeconomic classes. Building in this knowledge may result in an A/E ratio closer to 1. Similarly, upper-income policyholders may have the ability to anti-select, which also could be built into the next model iteration. The Bayesian framework used is especially well-suited to the incorporation of this type of expert knowledge.

For partial pooling when measuring mortality rates, we fit a nonlinear regression model to publicly available mortality data25 with and without partial pooling of model parameters and held all else (e.g., the data and the characteristics being analyzed) constant. We compared the partially pooled model to both regularized and nonregularized nonlinear regression models using R’s glmnet package.

We ran the models with different characteristic subsets to validate that our results are not characteristic-dependent. Almost always, the models without partial pooling of parameters yielded implausible estimates for cohorts with especially low exposures or claims, sometimes deviating from the population mean by more than four orders of magnitude. On the other hand, the mortality rates in the partially pooled model were much closer to the population mean on an exposure-controlled basis. Outlier behavior of the magnitude seen when partial pooling was not used was not observed.

When comparing models using Bayesian selection methods,26 the partially pooled model had significantly better LOO cross validation and WAIC scores, as shown in Figure 3.27
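A sketch of how such a comparison can be run with ArviZ, assuming two fitted models' InferenceData objects (named idata_pooled and idata_partial here for illustration) with pointwise log-likelihoods stored (in PyMC, via idata_kwargs={"log_likelihood": True} when sampling):

```python
# Sketch of the LOO/WAIC comparison described above, using ArviZ.
# idata_pooled / idata_partial are assumed fitted models with stored
# log-likelihoods, e.g., from samplers like the one sketched earlier.
import arviz as az

comparison = az.compare(
    {"no_pooling": idata_pooled, "partial_pooling": idata_partial},
    ic="loo",  # switch to "waic" for the WAIC-based ranking
)
print(comparison[["elpd_loo", "p_loo", "elpd_diff", "weight"]])
```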

Figure 3: Model Validation Comparison

*For this row, we show values for the regularized (nonpartial pooling) model that gives the best results.

When predicting mortality rates for cohorts with relatively small exposures (~5 percent of the mean per-cohort exposure, 153 cohorts out of 8,000), the nonpooled models yielded mortality estimates that are less than 0.01 percent of the mean mortality rate (interestingly enough, over-estimation was not observed). This under-estimation resulted from improper handling of small sample sizes. These results held even with the regularized models, which are very similar to models with graduation.28
On the other hand, models with partial pooling did not produce such extreme estimates because of the beneficial impacts of shrinkage. Proper handling of mortality estimates in cohorts with small exposures is critical, as such cohorts will almost certainly exist when modeling data at high granularity.

CONCLUSION

This article explored innovative approaches to modeling PLT lapse and mortality. A multivariate PLT lapse and mortality model improves mortality estimates and sheds new light on the interactions among changes in premium, persistency and mortality. Because management would have the information it needs in real time, this transforms pricing, reserving and “what if” analysis.

Partial pooling shares information among cohorts, accounts for different cohort sizes, regularizes estimates and improves credibility. When there are multidimensional cohorts with sparse data, partial pooling can provide unique insights into policyholder behavior, which is very valuable for insurers looking to manage risks and finances and optimize top-line growth.

The Bayesian model allows us to capture our prior knowledge of the data-generating process, such as the reasonable values of the effectiveness parameter. Such a model will be practical and implementable—and not just a nice theoretical toy.

The methods discussed in this article are valuable for answering a wide range of sophisticated actuarial questions. Actuaries and insurers will want to consider how advanced methodologies such as the innovative lapse-mortality model, causal inference and Bayesian decision theory could be used to address crucial challenges. Now that the availability of computational resources facilitates the implementation of these advanced methodologies, insurers face a new imperative. These techniques can be extended to general lapse and mortality studies along with other aspects of the insurer experience. We look forward to seeing the improvements in pricing and reserving (such as for principles-based reserving) and the increases in credibility that will emerge from greater use of these techniques.

Martin Snow, FSA, MAAA, is vice president, chief delivery officer and chief actuary at Atidot.

Adam Haber is a data scientist at Atidot in Tel Aviv.

Need for a Dedicated Coding Language

Why Actuaries Need a Unified, Dedicated Programming Language
By Barak Bercovitz, Atidot Co-Founder & CTO

Insurance is rooted in data innovation. Wide swaths of modern statistics and probability were first devised to accurately price, predict and manage risk. But insurance’s pioneering position has faltered in recent years.

While today’s economy is ablaze with revolutionary advancements in big data and computation, the insurance industry has been uneven in its adoption and application of cutting-edge data technologies. One study found that just 20 percent of the data collected by insurance companies is usable for strategic analysis. Current attempts to incorporate big data and machine learning into insurance products tend to occur on an in-house and ad-hoc basis.

High financial stakes and strict regulations already complicate big data adoption, but beyond that, the lack of a formalized system or computer language for interfacing with the available tools, technologies and data can prove one of the biggest obstacles to progress. This is why the life insurance industry as a whole, and actuaries, in particular, are in dire need of their own unified, dedicated programming language. As the CTO of a startup working with big life insurance companies, my team recognized this pressing need and committed ourselves to authoring an insurance and actuary focused programming language to help fill the gap.

To understand the distinct challenges of applying technical innovations to the insurance industry, it is essential to first peel back the complex layers behind computer applications in general. Computers have come a long way since their earliest days as room-sized mainframes with punch-card readouts. But at their core, all modern computers still reflect these legacy, hard-wired roots. Graphical interfaces and polished applications might make today's computers more user-friendly, but every action and instruction must still be translated and abstracted into binary machine code in order to be computed on.

Now, this is not to say that developers sit typing their code as zeroes and ones. Rather, modern programming languages use their own distinct shorthand, which is then compiled into code readable by hardware. However, the particular output logic required varies by computer architecture: GPUs operate differently than CPUs, which operate differently than cloud computing frameworks. The trend, therefore, has been to author general-purpose languages (GPLs) that accommodate the widest range of uses on a particular machine or architecture. Instead of optimizing for a specific problem or use case, GPLs ask the programmer to learn a new language and apply it to their given domain.

While this complicates developing specialty applications of any kind, the unique contours of the life insurance industry add an additional layer of difficulty. Regulations governing insurance are among the strictest and most byzantine of any industry. And beyond the issues of compliance come the extraordinary financial and social stakes riding on the integrity of insurance products. Core pillars of the private and public sector are propped up by the accurate, reliable management of risk. Insurance models running on shaky code could turn a tiny software bug into tens of millions in losses, the eventuality of which is only amplified by the enormous complexity of accurately calculating risk five, 10 or even 25 years into the future.

Seeing these issues firsthand inspired development of the Atidot LIA (Language for Insurance and Actuaries). What my team and I realized when approaching this challenge was that what initially looked like one problem was actually three distinct but interrelated issues.

The first issue was the substantial technical demands of carrying out the tasks actuaries would demand of big data. Cleaning and anonymizing raw data, modeling it properly, testing and executing on a laptop or workstation and ensuring all code passed formal verification – these intricate operations would be a baseline requirement of any function.

After addressing the fundamental complexity of insurance operations, the next issue was simplifying the syntax and optimizing legibility for domain experts who might not be professional developers. By building in insurance-specific entities, data models, and analytics models for several use-cases, LIA allows actuaries to speak the language of insurance instead of memorizing the arbitrary conventions of Python, Visual Basic, or C++.

Lastly, the unification of all necessary functionality into a syntactically legible framework would enable frictionless integration with machine learning models and accelerate time-to-market for new actuarial products. In other words, it would allow actuaries to write, debug and deploy big data in terms they could easily understand. Harmonizing function and syntax would help resolve some of the major roadblocks facing data integration.
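To illustrate the idea (and only the idea: this is a hypothetical sketch in Python, not actual LIA syntax, which this post does not show), consider how insurance-native primitives can hide general-purpose boilerplate from the actuary:

```python
# Purely hypothetical illustration -- NOT actual LIA syntax. It shows how
# insurance-native primitives could let an actuary read and tweak domain
# rules directly, without general-purpose boilerplate.
from dataclasses import dataclass

@dataclass
class Policy:
    age: int
    face_amount: float
    premium_due_day: int
    months_in_force: int

def lapse_score(p: Policy) -> float:
    """Toy domain rule; the coefficients are made-up placeholders."""
    # Days from the nearest assumed payday (1st or 15th of the month).
    payday_gap = min(p.premium_due_day - 1, abs(p.premium_due_day - 15))
    # Newer policies are assumed more lapse-prone than seasoned ones.
    seasoning = max(0.0, 1.0 - p.months_in_force / 120)
    return 0.05 + 0.01 * payday_gap * seasoning

book = [Policy(42, 250_000, 27, 18), Policy(31, 100_000, 2, 96)]
at_risk = [p for p in book if lapse_score(p) > 0.1]
```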

The current tension between the enormous promise of big data for the life insurance industry and the difficulty of developing dedicated software contributes to a compromise worse than the sum of its downsides. Today, actuaries looking to incorporate big data or machine learning are forced to cobble together homegrown solutions using a patchwork of languages and tools. Otherwise, they must rely on dedicated developers who lack the domain expertise to fluently translate actuarial needs into proper code. This disconnect creates friction and stunts progress.

However, by empowering actuaries to translate their domain expertise into instructions usable by cutting-edge technologies, a dedicated programming language will help align the existing talent in the industry with the untapped potential of data innovation. Modeling insurance is increasingly becoming a multi-disciplinary challenge, and a more precise, specialized programming language will help foster collaboration and jump-start innovation. In other words, our vision is to help big data and life insurance finally speak the same language.

Average US Life Insurance Policyholder 74% Under-Insured According to New Study by Atidot

A report published today by Atidot, an insurance technology company providing AI, big data, and predictive analytics tools to the life insurance industry, exposes the widespread problem of under-insurance in the US life insurance industry. According to the report, only 26% of the total life insurance coverage needed is currently met, leaving 74% of potential coverage unmet. The report also found that insurance companies are missing out on an average of $785 in annual life insurance premium payments per person who requires insurance coverage in the U.S., resulting in a total missed potential of almost $70 billion in annual premiums.

“Policyholders are generally unaware that they are underinsured, and the onus must be on the insurance industry to remedy that,” said Dror Katzav, CEO of Atidot. “Life-insurance companies need to be able to utilize the troves of data at their disposal to better engage with their customers. New solutions enable insurers to harness this data efficiently and know when to contact clients to update their coverage and prevent lapsation.”

The report analyzes the levels of insurance on a state-by-state basis, uncovering the rate of under-insurance for individual states and the US as a whole. The state with the greatest percentage of under-insurance is West Virginia, with an average of 85%, while Oklahoma, the least under-insured state, still recorded a staggering 51%. The findings clearly show how widespread the problem is, demonstrating an alarming disconnect between providers and policyholders.

The report reveals that companies are forfeiting enormous profit potential and placing their most valuable asset, a loyal customer base, at risk by failing to capitalize on the data they possess. The failure to strategically interact with their clients comes at a substantial cost for insurers and customers alike.

The full report can be found here: https://www.atidot.com/under-insurance-report-2018

Next Generation Insurance for Next Gen Customers

The Current State

The U.S. life insurance industry’s average annual growth over the past 10 years has been less than 2% in nominal terms and negative in real terms. Meanwhile, the average face value amount of individual life insurance policies purchased in the US has steadily increased from $110,000 to over $170,000 (McKinsey Research), indicating that life insurers are failing to reach the middle market.

According to McKinsey’s mass affluent research in 2015, only 65 percent of Americans who are married with dependents have a life or an annuity policy, while 97% own an investment account. Separate research by LIMRA shows that only two-thirds of Gen Y consumers have any kind of life insurance, compared with three-quarters of Gen X and Boomers. In addition, fewer Gen Y consumers own individual life insurance (34 percent) than Gen X consumers (45 percent). More than half of Baby Boomers report owning individual life insurance (52 percent).

Why is the life insurance industry struggling to get these next-gen customers on board?

Failure to adopt new technologies is a prominent factor. In 2019, some 68% of insurance agents under 40 said that the insurance industry is too slow to adapt to change. There are 310 insurtech startups in the US alone, so why is the industry so slow? The answer has to do with business culture, slow financial processes, and low-level digitization, but it is also connected to low consumer engagement.

In the 90s, Jeff Bezos said that the biggest impact on e-commerce would be to reduce the friction between an intent to buy and the time it takes for your computer to reboot and connect to the Internet. This was when booting up still took a good five minutes. Purchasing a life insurance policy, on the other hand, takes 55 days on average.

Customers today expect their service providers to deliver service from the get-go and for life. Amazon offers you products that people like you buy. Spotify learns your music preferences. Similarly, life insurance companies have the opportunity to be long-term partners. Unfortunately, the existing client base, representing over 80% of the business, captures less than 20% of managerial attention.

However, we have seen that by utilizing Amazon-like engines for providing service, targeted agent communications, and recommendations, carriers can more than double the premium received over the term of the policy.

This is where new and advanced technologies play a role in bridging the gap between the ‘old’ and the ‘new’ to increase traction with next-generation customers.

With the outbreak of the Covid-19 pandemic, things are ripe for a change. Digitization turned from an interesting trend to a necessity as companies and, more importantly, brokers and agents transitioned to working on a remote basis. Digital adoption in the insurance industry globally grew by 20% in 2020 and is expected to accelerate even further.

In a Deloitte survey of the top 200 insurers, respondents stated that 23% of their premium volume was the result of new initiatives, and they expect that share to grow by 33% in the next 5 years. The number one trend is data innovation.

Next-Generation Technologies

Data is the currency of the future. The insurance companies that successfully utilize AI and Machine Learning to power their strategy and provide a customer-centric experience will prevail.

The barrier to applying new technologies to Life Insurance is not only a lack of digital data but also the low quality of the available data. The ability to produce intelligent insights via AI algorithms is totally dependent on these two factors. Therefore, enriching insurance data with qualitative external resources is of great importance.

Traditional life insurers need to become much more proactive in preparing themselves for the fierce competition they will soon face from fully digital, agile Insurtech companies offering friendly, personalized, easy-to-understand policies. NextGen customers expect no less – 88% of insurance consumers demand more personalization from insurers, but until now, most carriers haven’t implemented a reliable means of providing that personalized service.

New technologies can transform data into actionable insights, thus enabling providers to empower their agents to address the unmet challenges and optimize their books of business.

This new approach is revolutionizing the life insurance industry; together with other advanced front-facing systems, it teaches next-generation customers to expect, and to receive, better products and services from life insurance providers.