Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/MHaringa/insurancerating
R-package for actuarial pricing
- Host: GitHub
- URL: https://github.com/MHaringa/insurancerating
- Owner: MHaringa
- Created: 2018-09-03T07:59:58.000Z (about 6 years ago)
- Default Branch: master
- Last Pushed: 2024-05-22T11:50:30.000Z (6 months ago)
- Last Synced: 2024-05-22T11:52:30.585Z (6 months ago)
- Topics: actuarial, actuarial-science, insurance, pricing, r-package
- Language: R
- Homepage: https://mharinga.github.io/insurancerating
- Size: 194 MB
- Stars: 65
- Watchers: 6
- Forks: 16
- Open Issues: 1
Metadata Files:
- Readme: README.Rmd
- Changelog: NEWS.md
Awesome Lists containing this project
- jimsghstars - MHaringa/insurancerating - R-package for actuarial pricing (R)
README
---
output: github_document
---

```{r, include = FALSE}
knitr::opts_chunk$set(
  fig.path = "man/figures/"
)
```

# insurancerating
[![CRAN Status](https://www.r-pkg.org/badges/version/insurancerating)](https://cran.r-project.org/package=insurancerating)
[![Downloads](https://cranlogs.r-pkg.org/badges/insurancerating?color=blue)](https://cran.rstudio.com/package=insurancerating)

The goal of `insurancerating` is to provide analytic techniques that can be used in insurance rating. It helps actuaries implement GLMs in all relevant steps needed to construct a risk premium from raw data.
It provides a data-driven strategy for the construction of tariff classes in P&C insurance. The goal is to bin the continuous factors so that the resulting categorical risk factors capture the effect of the covariate on the response accurately, while remaining easy to use in a generalized linear model (GLM).
`insurancerating` also provides recipes for easily performing univariate analyses on an insurance portfolio. In addition, it adds functionality to include reference categories in the levels of the coefficients in the output of a generalized linear regression analysis.
## Installation
Install insurancerating from CRAN:
```{r, eval = FALSE}
install.packages("insurancerating")
```

Or install the development version from GitHub:
```{r gh-installation, eval = FALSE}
# install.packages("remotes")
remotes::install_github("MHaringa/insurancerating")
```
## Example 1
This is a basic example which shows the techniques provided in `insurancerating`. The first part shows how to fit a GAM for the variable *age_policyholder* in the MTPL dataset:
```{r example, eval = TRUE, message = FALSE, warning = FALSE}
library(insurancerating)

# Claim frequency
age_policyholder_frequency <- fit_gam(data = MTPL,
                                      nclaims = nclaims,
                                      x = age_policyholder,
                                      exposure = exposure)

# Claim severity
age_policyholder_severity <- fit_gam(data = MTPL,
                                     nclaims = nclaims,
                                     x = age_policyholder,
                                     exposure = exposure,
                                     amount = amount,
                                     model = "severity")
```

Create plot:
```{r plotgam, eval = TRUE}
autoplot(age_policyholder_frequency, show_observations = TRUE)
```
Determine classes for the claim frequency (the points show the ratio between the observed number of claims and exposure for each age). This method is based on the work by Henckaerts et al. (2018); see `?construct_tariff_classes` for the reference.
```{r figfreq, eval = TRUE}
clusters_freq <- construct_tariff_classes(age_policyholder_frequency)
clusters_sev <- construct_tariff_classes(age_policyholder_severity)

autoplot(clusters_freq, show_observations = TRUE)
```
In this example the term *exposure* is a measure of what is being insured. Here an insured vehicle is an exposure. If the vehicle is insured as of July 1 for a certain year, then during that year, this would represent an exposure of 0.5 to the insurance company.
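As an illustration (not part of `insurancerating`), here is a minimal sketch of computing exposure as the insured fraction of the year, assuming hypothetical `start_date` and `end_date` columns that are not in the MTPL data:

```{r exposure-sketch, eval = FALSE}
library(dplyr)

# Hypothetical policy records (illustrative columns only)
policies <- data.frame(start_date = as.Date(c("2023-01-01", "2023-07-01")),
                       end_date   = as.Date(c("2023-12-31", "2023-12-31")))

# Exposure as the insured fraction of the year
policies |>
  mutate(exposure = as.numeric(end_date - start_date + 1) / 365)
# The policy starting July 1 contributes an exposure of roughly 0.5
```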
The figure shows that younger policyholders have a higher risk profile. The fitted GAM is lower than might be expected from the observed claim frequency for policyholders of age 19; this is because very few policyholders of age 19 are present in the portfolio.
The GAM for the claim severity:
```{r figsev, eval = TRUE}
autoplot(age_policyholder_severity,
         show_observations = TRUE,
         remove_outliers = 100000)
```
The second part adds the constructed tariff classes for the variable `age_policyholder` to the dataset, and sets the base level of the factor `age_policyholder` to the level with the largest exposure. In this example, for claim frequency, this is the class for ages (39,50], which contains the largest exposure.
```{r example2, eval = TRUE, message = FALSE, warning = FALSE}
library(dplyr)
dat <- MTPL |>
  mutate(age_policyholder_freq_cat = clusters_freq$tariff_classes) |>
  mutate(across(where(is.character), as.factor)) |>
  mutate(across(where(is.factor), ~ biggest_reference(., exposure)))

glimpse(dat)
```
The last part is to fit a *generalized linear model*. `rating_factors()` prints the output including the reference group.
```{r example3, eval = TRUE}
model_freq1 <- glm(nclaims ~ age_policyholder_freq_cat,
                   offset = log(exposure),
                   family = "poisson",
                   data = dat)

model_freq2 <- glm(nclaims ~ age_policyholder_freq_cat + age_policyholder,
                   offset = log(exposure),
                   family = "poisson",
                   data = dat)

x <- rating_factors(model_freq1, model_freq2)
x
```
`autoplot.riskfactor()` creates a figure. The base level of the factor `age_policyholder_freq_cat` is the group with the largest exposure and is shown first.
```{r example3a1, eval = TRUE}
autoplot(x)
```
Include `model_data` to sort the clustering in the original order. Ordering the factor `age_policyholder_freq_cat` only works when `biggest_reference()` is used to set the base level of the factor to the level with the largest exposure.
```{r example3b, eval = TRUE}
rf <- rating_factors(model_freq1, model_freq2,
                     model_data = dat)

autoplot(rf)
```
The following graph includes the exposure as a bar graph and shows some more options:
```{r example3c, eval = TRUE}
rf <- rating_factors(model_freq1, model_freq2,
                     model_data = dat, exposure = exposure)

autoplot(rf, linetype = TRUE)
```
Add predictions to the data set:
```{r addpredictions, eval = TRUE}
dat_pred <- dat |>
  add_prediction(model_freq1, model_freq2)

glimpse(dat_pred)
```
Compute indices of model performance for GLMs. The RMSE is the square root of the average of squared differences between prediction and actual observation and indicates the absolute fit of the model to the data. It can be interpreted as the standard deviation of the unexplained variance, and is in the same units as the response variable.
```{r modelperf, eval = TRUE}
model_performance(model_freq1, model_freq2)
```
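As a quick sanity check, the RMSE definition above can be reproduced by hand for `model_freq1` (a minimal sketch using base R):

```{r rmse-by-hand, eval = FALSE}
# RMSE computed directly from its definition
pred <- predict(model_freq1, type = "response")
sqrt(mean((dat$nclaims - pred)^2))
```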
To test the stability of the predictive ability of the fitted model, it might be helpful to determine the variation in the computed RMSE. The variation is calculated by computing the root mean squared errors from `n` generated bootstrap replicates.
For claim severity models it might be helpful to test the variation in the RMSE in case the portfolio contains large claim sizes. The figure below shows that the variation in the RMSE of the frequency model is quite low (as expected). The dashed line shows the RMSE of the fitted (original) model, the other lines represent the 95% confidence interval.
```{r bootstraprmse, eval = TRUE}
bootstrap_rmse(model_freq1, dat, n = 100, show_progress = FALSE) |>
  autoplot()
```
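For intuition, here is a minimal sketch of the underlying bootstrap idea, a simplified stand-in for `bootstrap_rmse()` rather than its exact implementation:

```{r bootstrap-sketch, eval = FALSE}
set.seed(1)
rmse_boot <- replicate(100, {
  idx  <- sample(nrow(dat), replace = TRUE)        # resample rows with replacement
  fit  <- update(model_freq1, data = dat[idx, ])   # refit on the bootstrap sample
  pred <- predict(fit, newdata = dat, type = "response")
  sqrt(mean((dat$nclaims - pred)^2))               # RMSE of the refitted model
})
quantile(rmse_boot, c(0.025, 0.975))               # 95% interval of the bootstrap RMSEs
```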
Check the Poisson GLM for overdispersion. A dispersion ratio larger than one indicates overdispersion; this occurs when the observed variance is higher than the variance of the theoretical model. If the dispersion ratio is close to one, a Poisson model fits the data well. A *p*-value < .05 indicates overdispersion. A dispersion ratio > 2 probably means there is a larger problem with the data: check (again) for outliers.
```{r overdispersion, eval = TRUE}
check_overdispersion(model_freq1)
```
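The dispersion ratio itself can be computed by hand as the sum of squared Pearson residuals divided by the residual degrees of freedom (a minimal sketch; `check_overdispersion()` may use a different test internally):

```{r dispersion-by-hand, eval = FALSE}
sum(residuals(model_freq1, type = "pearson")^2) / df.residual(model_freq1)
```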
Misspecifications in GLMs cannot reliably be diagnosed with standard residual plots, and GLMs are thus often not as thoroughly checked as LMs. One reason why GLM residuals are harder to interpret is that the expected distribution of the data changes with the fitted values. As a result, standard residual plots, when interpreted in the same way as for linear models, seem to show all kinds of problems, such as non-normality and heteroscedasticity, even if the model is correctly specified. `check_residuals()` aims to solve these problems by creating readily interpretable residuals for GLMs that are standardized to values between 0 and 1, and that can be interpreted as intuitively as residuals for the linear model. This is achieved by a simulation-based approach, similar to the Bayesian p-value or the parametric bootstrap, that transforms the residuals to a standardized scale. This explanation is adopted from the [vignette for DHARMa](https://cran.r-project.org/web/packages/DHARMa/vignettes/DHARMa.html).
Detect deviations from the expected distribution, and produce a uniform quantile-quantile plot. The simulated residuals in the QQ plot below show no clear deviation from a Poisson distribution. Note that formal tests almost always yield significant results for the distribution of residuals, so visual inspections (e.g. Q-Q plots) are preferable.
```{r normalitysim, message = FALSE, eval = TRUE}
check_residuals(model_freq1, n_simulations = 600) |>
  autoplot()
```
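For intuition, here is a simplified sketch of the simulation idea behind these residuals (not the exact implementation of `check_residuals()` or DHARMa): simulate new responses from the fitted Poisson model and locate each observation within its simulated distribution.

```{r simulated-residuals-sketch, eval = FALSE}
set.seed(1)
mu  <- fitted(model_freq1)
sim <- replicate(600, rpois(length(mu), lambda = mu))  # simulate from the fitted model

# Empirical position of each observation among its simulations,
# with ties randomized because the response is a discrete count
scaled_resid <- rowMeans(sim < dat$nclaims) +
  runif(length(mu)) * rowMeans(sim == dat$nclaims)

hist(scaled_resid)  # roughly uniform on [0, 1] if the model is well specified
```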
It might happen that, for some data point, all simulations from the fitted model have the same value (e.g. zero); this returns the error message *Error in approxfun: need at least two non-NA values to interpolate*. If that is the case, it could help to increase the number of simulations.
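For instance, rerunning with more simulations (reusing `model_freq1` from the example above):

```{r more-simulations, eval = FALSE}
check_residuals(model_freq1, n_simulations = 1000) |>
  autoplot()
```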
## Example 2
This is a basic example which shows how to easily perform a univariate analysis on an MTPL portfolio using `insurancerating`.
A univariate analysis consists of evaluating the overall claim frequency, severity, and risk premium. Its main purpose is to verify the reasonableness of the experience data by comparison with previous experience and professional judgement.
`univariate()` shows the basic risk indicators split by the levels of the discrete risk factor:
```{r example4}
library(insurancerating)
univariate(MTPL2,
           x = area,          # discrete risk factor
           nclaims = nclaims, # number of claims
           exposure = exposure,
           premium = premium,
           severity = amount) # loss
```
The following indicators are calculated (and reproduced directly in the sketch below):
1. frequency (i.e. frequency = number of claims / exposure)
2. average_severity (i.e. average severity = severity / number of claims)
3. risk_premium (i.e. risk premium = severity / exposure = frequency x average severity)
4. loss_ratio (i.e. loss ratio = severity / premium)
5. average_premium (i.e. average premium = premium / exposure)

Here the term *exposure* is a measure of what is being insured. For example, an insured vehicle is an exposure. If the vehicle is insured as of July 1 for a certain year, then during that year, this would represent an exposure of 0.5 to the insurance company. The term risk premium is used here as an equivalent of pure premium and burning cost.
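As a cross-check, here is a minimal sketch reproducing these indicators directly with dplyr, using the same MTPL2 columns as above:

```{r univariate-by-hand, eval = FALSE}
library(dplyr)

MTPL2 |>
  group_by(area) |>
  summarise(frequency        = sum(nclaims) / sum(exposure),
            average_severity = sum(amount) / sum(nclaims),
            risk_premium     = sum(amount) / sum(exposure),
            loss_ratio       = sum(amount) / sum(premium),
            average_premium  = sum(premium) / sum(exposure))
```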
`univariate()` ignores missing input arguments; for instance, only the claim frequency is calculated when `premium` and `severity` are unknown:
```{r example5}
univariate(MTPL2, x = area, nclaims = nclaims, exposure = exposure)
```
Although the table above is small and easy to understand, the same information might be presented more effectively with a graph, as shown below.
```{r example6, eval = TRUE, message = FALSE}
univariate(MTPL2, x = area, nclaims = nclaims, exposure = exposure) |>
  autoplot()
```
In `autoplot.univariate()`, `show_plots` defines the plots to show and also the order of the plots. The following plots are available:
1. frequency
2. average_severity
3. risk_premium
4. loss_ratio
5. average_premium
6. exposure
7. severity
8. nclaims
9. premium

For example, to show the exposure and claim frequency plots:
```{r example7}
univariate(MTPL2, x = area, nclaims = nclaims, exposure = exposure) |>
  autoplot(show_plots = c(6, 1))
```
To check whether claim frequency is consistent over the years:
```{r}
MTPL2 |>
  mutate(year = sample(2016:2019, nrow(MTPL2), replace = TRUE)) |>
  univariate(x = area, nclaims = nclaims, exposure = exposure, by = year) |>
  autoplot(show_plots = 1)
```
To remove the bars from the plot with the line graph, add `background = FALSE`:
```{r example8}
univariate(MTPL2, x = area, nclaims = nclaims, exposure = exposure) |>
  autoplot(show_plots = c(6, 1), background = FALSE)
```
`sort` orders the levels of the risk factor into descending order by exposure:
```{r example9}
univariate(MTPL2, x = area, nclaims = nclaims, exposure = exposure) |>
  autoplot(show_plots = c(6, 1), background = FALSE, sort = TRUE)
```
`sort_manual` in `autoplot.univariate()` can be used to sort the levels of the discrete risk factor into your own ordering. This makes sense when the levels of the risk factor have a natural order, or when not all levels of the risk factor are desired in the output.
```{r example10, eval = TRUE}
univariate(MTPL2, x = area, nclaims = nclaims, exposure = exposure) |>
  autoplot(show_plots = c(6, 1), background = FALSE,
           sort_manual = c("2", "3", "1", "0"))
```
The following graph shows some more options:
```{r example11, fig.width = 10, fig.height = 5}
univariate(MTPL2, x = area, nclaims = nclaims, exposure = exposure) |>
  autoplot(show_plots = c(6, 1), background = FALSE, sort = TRUE, ncol = 2,
           color_bg = "dodgerblue", color = "blue",
           custom_theme = ggplot2::theme_bw())
```
Or create a bar graph for the number of claims:
```{r example12, eval = TRUE, message = FALSE, warning = FALSE}
univariate(MTPL2, x = area, nclaims = nclaims) |>
  autoplot(show_plots = 8, coord_flip = TRUE, sort = TRUE)
```
For continuous variables a histogram can be created:
```{r example14, eval = TRUE, message = FALSE, warning = FALSE}
histbin(MTPL2, premium)
```
Two ways of displaying numerical data over a very wide range of values in a compact way are taking the logarithm of the variable or omitting the outliers. Neither shows the original distribution, however. Another way is to create one bin for all the outliers. This preserves the original distribution and also gives a feel for the number of outliers.
```{r example15, eval = TRUE, message = FALSE, warning = FALSE}
histbin(MTPL2, premium, right = 110)
```
## Example 3
This is a basic example which shows how to easily perform model refinement using `insurancerating`. `insurancerating` can be used either to impose smoothing on the parameter estimates or to add restrictions to them. These methods are derived from the article *Third Party Motor Liability Ratemaking with R*, by Spedicato, G. (2012).
Fit (again) a Poisson GLM and a Gamma GLM, and combine them to determine premiums:
```{r example16, eval = TRUE, message = FALSE, warning = FALSE}
mod_freq <- glm(nclaims ~ zip + age_policyholder_freq_cat,
                offset = log(exposure),
                family = "poisson",
                data = dat)

mod_sev <- glm(amount ~ bm + zip,
               weights = nclaims,
               family = Gamma(link = "log"),
               data = dat |> filter(amount > 0))

MTPL_premium <- dat |>
  add_prediction(mod_freq, mod_sev) |>
  mutate(premium = pred_nclaims_mod_freq * pred_amount_mod_sev)
```
Fit a burning cost model without restrictions. Even though restrictions could be applied to the frequency and severity models, it is more appropriate to add restrictions (and smoothing) to the risk premium model.
```{r example17, eval = TRUE, message = FALSE, warning = FALSE}
burn_unrestricted <- glm(premium ~ zip + bm + age_policyholder_freq_cat,
                         weights = exposure,
                         family = Gamma(link = "log"),
                         data = MTPL_premium)
```
Smoothing can be used to reduce the tolerance for rate change. In `smooth_coef()`, `x_cut` is the name of the risk factor with clusters, `x_org` is the name of the original risk factor, `degree` is the order of the polynomial, and `breaks` is a numerical vector with new clusters for `x_org`. The smoothed estimates are added as an offset term to the model. An offset is just a fixed term added to the linear predictor, therefore if there is already an offset in the model, the offset terms are added together first (i.e. $\text{offset} = \log(a) + \log(b) = \log(a \cdot b)$).
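A quick numeric check of the offset identity used here:

```{r offset-check, eval = FALSE}
a <- 2; b <- 3
all.equal(log(a) + log(b), log(a * b))  # TRUE: offsets on the log scale add up
```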
```{r example18, eval = TRUE, message = FALSE, warning = FALSE}
burn_unrestricted |>
  smooth_coef(x_cut = "age_policyholder_freq_cat",
              x_org = "age_policyholder",
              breaks = seq(18, 95, 5)) |>
  print()
```
`autoplot()` creates a figure for the smoothed estimates. The blue segments show the estimates from the unrestricted model. The red segments are the new estimates based on the polynomial.
```{r example18a, eval = TRUE, message = FALSE, warning = FALSE}
burn_unrestricted |>
  smooth_coef(x_cut = "age_policyholder_freq_cat",
              x_org = "age_policyholder",
              breaks = seq(18, 95, 5)) |>
  autoplot()
```
`degree` can be used to change the order of the polynomial:
```{r example19, eval = TRUE, message = FALSE, warning = FALSE}
burn_unrestricted |>
  smooth_coef(x_cut = "age_policyholder_freq_cat",
              x_org = "age_policyholder",
              degree = 1,
              breaks = seq(18, 95, 5)) |>
  autoplot()
```
`smooth_coef()` must always be followed by `update_glm()` to refit the GLM.
```{r example20, eval = TRUE, message = FALSE, warning = FALSE}
burn_restricted <- burn_unrestricted |>
  smooth_coef(x_cut = "age_policyholder_freq_cat",
              x_org = "age_policyholder",
              breaks = seq(18, 95, 5)) |>
  update_glm()

# Show rating factors
rating_factors(burn_restricted)
```
Most insurers have some form of a Bonus-Malus System in vehicle third party liability insurance. `restrict_coef()` can be used to impose such restrictions. `restrictions` must be a data.frame with the levels of the risk factor in the first column and the restricted coefficients in the second column. The following example shows restrictions on the risk factor for region `zip`:
```{r example21, eval = TRUE, message = FALSE, warning = FALSE}
zip_df <- data.frame(zip = c(0, 1, 2, 3),
                     zip_restricted = c(0.8, 0.9, 1, 1.2))

burn_unrestricted |>
  restrict_coef(restrictions = zip_df) |>
  print()
```
To adjust the GLM, `restrict_coef()` must always be followed by `update_glm()`:
```{r example22, eval = TRUE, message = FALSE, warning = FALSE}
burn_restricted2 <- burn_unrestricted |>
  restrict_coef(restrictions = zip_df) |>
  update_glm()

rating_factors(burn_restricted2)
```
`autoplot()` compares the restricted and the unrestricted estimates:
```{r example23, eval = TRUE, message = FALSE, warning = FALSE}
burn_unrestricted |>
  restrict_coef(restrictions = zip_df) |>
  autoplot()
```
The model `burn_restricted3` below combines `restrict_coef()` and `smooth_coef()`:
```{r example24, eval = TRUE, message = FALSE, warning = FALSE}
burn_restricted3 <- burn_unrestricted |>
  restrict_coef(restrictions = zip_df) |>
  smooth_coef(x_cut = "age_policyholder_freq_cat",
              x_org = "age_policyholder",
              breaks = seq(18, 95, 5)) |>
  update_glm()

# Show rating factors
rating_factors(burn_restricted3)
```
And add refined premiums to the data:
```{r}
premiums3 <- model_data(burn_restricted3) |>
  add_prediction(burn_restricted3)

head(premiums3)
```
Or do the same with model points:
```{r}
premiums4 <- model_data(burn_restricted3) |>
  construct_model_points() |>
  add_prediction(burn_restricted3)

head(premiums4)
```