Generalized Random Forests
https://github.com/grf-labs/grf
causal-forest causal-inference econometrics machine-learning random-forest statistics
- Host: GitHub
- URL: https://github.com/grf-labs/grf
- Owner: grf-labs
- License: gpl-3.0
- Created: 2016-08-12T13:17:37.000Z (over 8 years ago)
- Default Branch: master
- Last Pushed: 2024-07-27T14:26:41.000Z (5 months ago)
- Last Synced: 2024-07-27T15:46:40.772Z (5 months ago)
- Topics: causal-forest, causal-inference, econometrics, machine-learning, random-forest, statistics
- Language: C++
- Homepage: https://grf-labs.github.io/grf/
- Size: 61.4 MB
- Stars: 945
- Watchers: 46
- Forks: 248
- Open Issues: 45
- Metadata Files:
- Readme: README.md
- License: COPYING
Awesome Lists containing this project
- awesome-list - grf - Generalized Random Forests. (Causal Inference / Others)
README
[![CRANstatus](https://www.r-pkg.org/badges/version/grf)](https://cran.r-project.org/package=grf)
[![](https://cranlogs.r-pkg.org/badges/grand-total/grf)](https://cran.r-project.org/package=grf)
[![Build Status](https://dev.azure.com/grf-labs/grf/_apis/build/status/grf-labs.grf?branchName=master)](https://dev.azure.com/grf-labs/grf/_build/latest?definitionId=2&branchName=master)

A package for forest-based statistical estimation and inference. GRF provides non-parametric methods for heterogeneous treatment effect estimation (optionally using right-censored outcomes, multiple treatment arms or outcomes, or instrumental variables), as well as least-squares regression, quantile regression, and survival regression, all with support for missing covariates.
In addition, GRF supports 'honest' estimation (where one subset of the data is used for choosing splits, and another for populating the leaves of the tree), and confidence intervals for least-squares regression and treatment effect estimation.
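As a quick preview of the interface, the sketch below fits a regression forest with out-of-bag variance estimates and a quantile forest on simulated data. It is a minimal illustration, not part of the package documentation: the data are made up, and the exact shape of the prediction objects may vary slightly across grf releases; the fuller causal forest walkthrough is in the Usage Examples section below.

```R
library(grf)

# Simulate a small heteroskedastic regression problem (illustrative data only).
n <- 500
p <- 5
X <- matrix(rnorm(n * p), n, p)
Y <- X[, 1] + (1 + abs(X[, 2])) * rnorm(n)

# Least-squares regression with out-of-bag variance estimates for confidence intervals.
r.forest <- regression_forest(X, Y)
r.pred <- predict(r.forest, estimate.variance = TRUE)
head(r.pred$predictions + 1.96 * sqrt(r.pred$variance.estimates))

# Quantile regression at the 10th, 50th, and 90th percentiles.
q.forest <- quantile_forest(X, Y, quantiles = c(0.1, 0.5, 0.9))
q.pred <- predict(q.forest, quantiles = c(0.1, 0.5, 0.9))
# In recent grf releases, q.pred$predictions holds one column per requested quantile.
```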
Some helpful links for getting started:
- The [R package documentation](https://grf-labs.github.io/grf/) contains usage examples and method reference.
- The [GRF reference](https://grf-labs.github.io/grf/REFERENCE.html) gives a detailed description of the GRF algorithm and includes troubleshooting suggestions.
- For community questions and answers around usage, see [GitHub issues labelled 'question'](https://github.com/grf-labs/grf/issues?q=label%3Aquestion).

The repository first started as a fork of the [ranger](https://github.com/imbs-hl/ranger) repository -- we owe a great deal of thanks to the ranger authors for their useful and free package.
### Installation
The latest release of the package can be installed through CRAN:
```R
install.packages("grf")
```

`conda` users can install from the [conda-forge](https://anaconda.org/conda-forge/r-grf) channel:
```
conda install -c conda-forge r-grf
```

The current development version can be installed from source using devtools.
```R
devtools::install_github("grf-labs/grf", subdir = "r-package/grf")
```

Note that to install from source, a compiler that implements C++11 or later is required. If installing on Windows, the RTools toolchain is also required.
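After installing, a quick way to confirm that the package is available is to load it and print the installed version (the version number shown will depend on which release you installed):

```R
library(grf)
packageVersion("grf")
```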
### Usage Examples
The following script demonstrates how to use GRF for heterogeneous treatment effect estimation. For examples
of how to use other types of forests, please consult the R [documentation](https://grf-labs.github.io/grf/reference/index.html) on the relevant methods.

```R
library(grf)

# Generate data.
n <- 2000
p <- 10
X <- matrix(rnorm(n * p), n, p)
X.test <- matrix(0, 101, p)
X.test[, 1] <- seq(-2, 2, length.out = 101)

# Train a causal forest.
W <- rbinom(n, 1, 0.4 + 0.2 * (X[, 1] > 0))
Y <- pmax(X[, 1], 0) * W + X[, 2] + pmin(X[, 3], 0) + rnorm(n)
tau.forest <- causal_forest(X, Y, W)

# Estimate treatment effects for the training data using out-of-bag prediction.
tau.hat.oob <- predict(tau.forest)
hist(tau.hat.oob$predictions)

# Estimate treatment effects for the test sample.
tau.hat <- predict(tau.forest, X.test)
plot(X.test[, 1], tau.hat$predictions, ylim = range(tau.hat$predictions, 0, 2), xlab = "x", ylab = "tau", type = "l")
lines(X.test[, 1], pmax(0, X.test[, 1]), col = 2, lty = 2)

# Estimate the conditional average treatment effect on the full sample (CATE).
average_treatment_effect(tau.forest, target.sample = "all")

# Estimate the conditional average treatment effect on the treated sample (CATT).
average_treatment_effect(tau.forest, target.sample = "treated")

# Add confidence intervals for heterogeneous treatment effects; growing more trees is now recommended.
tau.forest <- causal_forest(X, Y, W, num.trees = 4000)
tau.hat <- predict(tau.forest, X.test, estimate.variance = TRUE)
sigma.hat <- sqrt(tau.hat$variance.estimates)
plot(X.test[, 1], tau.hat$predictions, ylim = range(tau.hat$predictions + 1.96 * sigma.hat, tau.hat$predictions - 1.96 * sigma.hat, 0, 2), xlab = "x", ylab = "tau", type = "l")
lines(X.test[, 1], tau.hat$predictions + 1.96 * sigma.hat, col = 1, lty = 2)
lines(X.test[, 1], tau.hat$predictions - 1.96 * sigma.hat, col = 1, lty = 2)
lines(X.test[, 1], pmax(0, X.test[, 1]), col = 2, lty = 1)

# In some examples, pre-fitting models for Y and W separately may
# be helpful (e.g., if different models use different covariates).
# In some applications, one may even want to get Y.hat and W.hat
# using a completely different method (e.g., boosting).

# Generate new data.
n <- 4000
p <- 20
X <- matrix(rnorm(n * p), n, p)
TAU <- 1 / (1 + exp(-X[, 3]))
W <- rbinom(n, 1, 1 / (1 + exp(-X[, 1] - X[, 2])))
Y <- pmax(X[, 2] + X[, 3], 0) + rowMeans(X[, 4:6]) / 2 + W * TAU + rnorm(n)

forest.W <- regression_forest(X, W, tune.parameters = "all")
W.hat <- predict(forest.W)$predictions

forest.Y <- regression_forest(X, Y, tune.parameters = "all")
Y.hat <- predict(forest.Y)$predictions

forest.Y.varimp <- variable_importance(forest.Y)
# Note: Forests may have a hard time when trained on very few variables
# (e.g., ncol(X) = 1, 2, or 3). We recommend not being too aggressive
# in selection.
selected.vars <- which(forest.Y.varimp / mean(forest.Y.varimp) > 0.2)

tau.forest <- causal_forest(X[, selected.vars], Y, W,
                            W.hat = W.hat, Y.hat = Y.hat,
                            tune.parameters = "all")

# See if a causal forest succeeded in capturing heterogeneity by plotting
# the TOC and calculating a 95% CI for the AUTOC.
train <- sample(1:n, n / 2)
train.forest <- causal_forest(X[train, ], Y[train], W[train])
eval.forest <- causal_forest(X[-train, ], Y[-train], W[-train])
rate <- rank_average_treatment_effect(eval.forest,
predict(train.forest, X[-train, ])$predictions)
plot(rate)
paste("AUTOC:", round(rate$estimate, 2), "+/", round(1.96 * rate$std.err, 2))
```

### Developing
In addition to providing out-of-the-box forests for quantile regression and causal effect estimation, GRF provides a framework for creating forests tailored to new statistical tasks. If you'd like to develop using GRF, please consult the [algorithm reference](https://grf-labs.github.io/grf/REFERENCE.html) and [development guide](https://grf-labs.github.io/grf/DEVELOPING.html).
### Funding
Development of GRF is supported by the National Institutes of Health, the National Science Foundation, the Sloan Foundation, the Office of Naval Research (Grant N00014-17-1-2131) and Schmidt Futures.
### References
Susan Athey and Stefan Wager.
Estimating Treatment Effects with Causal Forests: An Application.
Observational Studies, 5, 2019.
[paper, arxiv]

Susan Athey, Julie Tibshirani and Stefan Wager.
Generalized Random Forests.
Annals of Statistics, 47(2), 2019.
[paper, arxiv]

Yifan Cui, Michael R. Kosorok, Erik Sverdrup, Stefan Wager, and Ruoqing Zhu.
Estimating Heterogeneous Treatment Effects with Right-Censored Data via Causal Survival Forests.
Journal of the Royal Statistical Society: Series B, 85(2), 2023.
[paper, arxiv]

Rina Friedberg, Julie Tibshirani, Susan Athey, and Stefan Wager.
Local Linear Forests.
Journal of Computational and Graphical Statistics, 30(2), 2020.
[paper, arxiv]

Imke Mayer, Erik Sverdrup, Tobias Gauss, Jean-Denis Moyer, Stefan Wager and Julie Josse.
Doubly Robust Treatment Effect Estimation with Missing Attributes.
Annals of Applied Statistics, 14(3), 2020.
[paper, arxiv]

Erik Sverdrup, Maria Petukhova, and Stefan Wager.
Estimating Treatment Effect Heterogeneity in Psychiatry: A Review and Tutorial with Causal Forests.
2024.
[arxiv]

Stefan Wager.
Causal Inference: A Statistical Learning Approach.
2024.
[pdf]

Stefan Wager and Susan Athey.
Estimation and Inference of Heterogeneous Treatment Effects using Random Forests.
Journal of the American Statistical Association, 113(523), 2018.
[paper, arxiv]

Steve Yadlowsky, Scott Fleming, Nigam Shah, Emma Brunskill, and Stefan Wager.
Evaluating Treatment Prioritization Rules via Rank-Weighted Average Treatment Effects.
2021.
[arxiv]