https://github.com/kundajelab/abstention
Algorithms for abstention, calibration and domain adaptation to label shift.
- Host: GitHub
- URL: https://github.com/kundajelab/abstention
- Owner: kundajelab
- Created: 2017-11-07T20:46:52.000Z (almost 8 years ago)
- Default Branch: master
- Last Pushed: 2020-11-14T06:31:38.000Z (almost 5 years ago)
- Last Synced: 2024-04-25T20:46:10.316Z (over 1 year ago)
- Topics: abstention, calibration, domain-adaptation, label-shift, prior-probability-shift
- Language: Python
- Size: 7.47 MB
- Stars: 36
- Watchers: 9
- Forks: 4
- Open Issues: 0
Metadata Files:
- Readme: README.md
# Abstention, Calibration & Label Shift
Algorithms for abstention, calibration and domain adaptation to label shift.
Associated papers:
Shrikumar A\*†, Alexandari A\*, Kundaje A†, [A Flexible and Adaptive Framework for Abstention Under Class Imbalance](https://arxiv.org/abs/1802.07024)
Alexandari A\*, Kundaje A†, Shrikumar A\*†, [Maximum Likelihood with Bias-Corrected Calibration is Hard-To-Beat at Label Shift Adaptation](https://arxiv.org/abs/1901.06852)
\* co-first authors
† co-corresponding authors

## Examples
See [https://github.com/blindauth/abstention_experiments](https://github.com/blindauth/abstention_experiments) and [https://github.com/blindauth/labelshiftexperiments](https://github.com/blindauth/labelshiftexperiments) for Colab notebooks reproducing the experiments in the papers.
## Installation
```
pip install abstention
```

## Algorithms implemented
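For a concrete sense of what the calibration methods below do, here is a minimal NumPy/SciPy sketch of plain temperature scaling — fitting a single scalar temperature on held-out logits by minimizing negative log-likelihood. This is illustrative only and is not the `abstention` package's API.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(logits):
    # numerically stable softmax over the class axis
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(valid_logits, valid_labels):
    """Fit a scalar temperature T on validation logits.

    valid_labels are integer class indices."""
    def nll(T):
        probs = softmax(valid_logits / T)
        picked = probs[np.arange(len(valid_labels)), valid_labels]
        return -np.mean(np.log(picked + 1e-12))
    res = minimize_scalar(nll, bounds=(0.05, 20.0), method="bounded")
    return res.x

# Toy usage: deliberately overconfident logits on 3 classes
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=500)
logits = 5.0 * (np.eye(3)[labels] + 0.5 * rng.normal(size=(500, 3)))
T = fit_temperature(logits, labels)
calibrated = softmax(logits / T)
```

Methods such as Bias-Corrected Temperature Scaling extend this idea with per-class bias terms; the package provides those variants directly.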
For calibration:
- Platt Scaling
- Isotonic Regression
- Temperature Scaling
- Vector Scaling
- Bias-Corrected Temperature Scaling
- No-Bias Vector Scaling

For domain adaptation to label shift:
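These methods re-estimate the test-set class priors from unlabeled test data. As an illustration of the general idea, here is a NumPy sketch of the EM procedure of Saerens et al. (2002); it is written from the paper's update equations and is not the package's implementation.

```python
import numpy as np

def em_label_shift(test_posteriors, source_priors, n_iter=100, tol=1e-8):
    """Re-estimate target-domain class priors from source-calibrated posteriors.

    test_posteriors: (N, C) predicted p_source(y|x) on unlabeled test data
    source_priors:   (C,) class priors p_source(y) from the training data
    Returns (target_priors, adapted_posteriors)."""
    q = source_priors.copy()
    for _ in range(n_iter):
        # E-step: reweight each posterior by the ratio of current to source priors
        w = test_posteriors * (q / source_priors)
        w /= w.sum(axis=1, keepdims=True)
        # M-step: the new prior estimate is the mean adapted posterior
        q_new = w.mean(axis=0)
        if np.abs(q_new - q).max() < tol:
            q = q_new
            break
        q = q_new
    adapted = test_posteriors * (q / source_priors)
    adapted /= adapted.sum(axis=1, keepdims=True)
    return q, adapted
```

BBSL and RLLS instead estimate the shift from a confusion matrix on held-out data; the package implements all three.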
- Expectation Maximization (Saerens et al., 2002)
- Black-Box Shift Learning (BBSL) (Lipton et al., 2018)
- Regularized Learning under Label Shifts (RLLS) (Azizzadenesheli et al., 2019)

For abstention:
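All of the abstention methods below assign each example an uncertainty score and abstain on the worst-scoring fraction. As a simple instance, here is a NumPy sketch of entropy-based abstention (Wan, 1990) — illustrative only, not the package's API:

```python
import numpy as np

def entropy_abstain(probs, abstain_frac=0.1):
    """probs: (N, C) predicted class probabilities.

    Returns a boolean mask, True where the model should abstain."""
    # Shannon entropy of each predicted distribution (higher = less certain)
    ent = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    n_abstain = int(round(abstain_frac * len(probs)))
    if n_abstain == 0:
        return np.zeros(len(probs), dtype=bool)
    # abstain on the n_abstain highest-entropy examples
    cutoff = np.partition(ent, -n_abstain)[-n_abstain]
    return ent >= cutoff
```

The metric-specific methods below replace this generic score with scores tuned to the evaluation metric (e.g. auROC or auPRC).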
- Metric-specific abstention methods described in [A Flexible and Adaptive Framework for Abstention Under Class Imbalance](https://arxiv.org/abs/1802.07024), including abstention to optimize auROC, auPRC, sensitivity at a target specificity and weighted Cohen's Kappa
- Jensen-Shannon Divergence from class priors
- Entropy in the predicted class probabilities (Wan, 1990)
- Probability of the highest-predicted class (Hendrycks \& Gimpel, 2016)
- The method of Fumera et al., 2000
- See the Colab notebooks in [https://github.com/blindauth/abstention_experiments](https://github.com/blindauth/abstention_experiments) for details on how to use the various abstention methods.

## Contact
If you have any questions, please contact:
Avanti Shrikumar: avanti [dot] shrikumar [at] gmail.com
Amr Alexandari: amr [dot] alexandari [at] gmail.com
Anshul Kundaje: akundaje [at] stanford [dot] edu