
Machine Learning in Least-Squares Monte Carlo Proxy Modeling of Life Insurance Companies

Risks. 2020;8(1):21. DOI: 10.3390/risks8010021



Journal Title: Risks

ISSN: 2227-9091 (Online)

Publisher: MDPI AG

LCC Subject Category: Social Sciences: Finance: Insurance

Country of publisher: Switzerland

Language of fulltext: English

Full-text formats available: PDF, HTML, ePUB, XML



Anne-Sophie Krah (Department of Mathematics, TU Kaiserslautern, Erwin-Schrödinger-Straße, Geb. 48, 67653 Kaiserslautern, Germany)

Zoran Nikolić (Mathematical Institute, University of Cologne, Weyertal 86-90, 50931 Cologne, Germany)

Ralf Korn (Department of Mathematics, TU Kaiserslautern, Erwin-Schrödinger-Straße, Geb. 48, 67653 Kaiserslautern, Germany)


Peer review: Blind peer review


Time From Submission to Publication: 11 weeks


Abstract

Under the Solvency II regime, life insurance companies are required to derive their solvency capital requirements from the full loss distributions over the coming year. Since the industry currently lacks the computational capacity to fully simulate these distributions, insurers have to rely on suitable approximation techniques such as the least-squares Monte Carlo (LSMC) method. The key idea of LSMC is to run only a few wisely selected simulations and to process their output further to obtain a risk-factor-dependent proxy function of the loss. In this paper, we present and analyze various adaptive machine learning approaches that can take over the proxy modeling task. The studied approaches range from ordinary and generalized least-squares regression variants through generalized linear model (GLM) and generalized additive model (GAM) methods to multivariate adaptive regression splines (MARS) and kernel regression routines. We justify the combinability of their regression ingredients in a theoretical discourse. Further, we illustrate the approaches in slightly disguised real-world experiments and perform comprehensive out-of-sample tests.
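To make the LSMC idea concrete, here is a minimal sketch, not the authors' implementation: it assumes a hypothetical two-factor loss function standing in for the expensive inner simulation, fits a degree-2 polynomial proxy by ordinary least squares on a modest number of noisy fitting scenarios, and checks the proxy out of sample. All names and parameter choices here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" loss as a function of two risk factors (e.g. an equity
# shock x1 and an interest-rate shock x2). In a real LSMC setup this value
# would come from an expensive inner Monte Carlo valuation.
def true_loss(x1, x2):
    return 2.0 * x1 + 0.5 * x2 + 1.5 * x1 * x2 + x1**2

# A few wisely selected outer fitting scenarios; the noise mimics the Monte
# Carlo error from using only a handful of inner simulation paths per point.
n_outer = 500
X = rng.uniform(-1.0, 1.0, size=(n_outer, 2))
y = true_loss(X[:, 0], X[:, 1]) + rng.normal(0.0, 0.2, size=n_outer)

# Polynomial basis up to degree 2 -- the "regression ingredients" of an
# ordinary least-squares proxy.
def basis(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)

# Out-of-sample test: the fitted proxy should track the noise-free loss.
X_test = rng.uniform(-1.0, 1.0, size=(200, 2))
proxy = basis(X_test) @ coef
err = np.max(np.abs(proxy - true_loss(X_test[:, 0], X_test[:, 1])))
print(f"max out-of-sample absolute proxy error: {err:.3f}")
```

The same fitting-and-validation loop applies when the polynomial basis is swapped for GLM, GAM, MARS, or kernel regression ingredients; only the regression step changes.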