https://github.com/csinva/mdl-complexity
MDL Complexity computations and experiments from the paper "Revisiting complexity and the bias-variance tradeoff".
- Host: GitHub
- URL: https://github.com/csinva/mdl-complexity
- Owner: csinva
- Created: 2020-05-12T20:27:39.000Z (about 5 years ago)
- Default Branch: master
- Last Pushed: 2023-06-12T01:19:49.000Z (almost 2 years ago)
- Last Synced: 2025-02-25T18:45:22.821Z (3 months ago)
- Topics: ai, artificial-intelligence, bias-variance-trade, bias-variance-tradeoff, complexity, double-descent, information-theory, linear-models, linear-regression, linear-regression-models, machine-learning, mdl, mean-squared-error, minimum-description-length, model-selection, ridge-regression, statistics
- Language: Jupyter Notebook
- Homepage: https://arxiv.org/abs/2006.10189
- Size: 14.4 MB
- Stars: 18
- Watchers: 5
- Forks: 2
- Open Issues: 0
- Metadata Files:
  - Readme: readme.md
# README
Official code for using / reproducing MDL-COMP from the paper "Revisiting complexity and the bias-variance tradeoff" ([arXiv link](https://arxiv.org/abs/2006.10189)). This code implements the calculation of MDL complexity given training data and explores its ability to inform generalization. MDL-COMP is a complexity measure based on Rissanen's principle of minimum description length. It enjoys nice theoretical properties and can be used to perform model selection, with results on par with cross-validation (and sometimes better when data is limited).
*Note: this repo is actively maintained. For any questions please file an issue.*
# Reproducing the results in the paper
- most of the results can be reproduced by simply running the notebooks
- the real-data experiments are more involved: first run `scripts/submit_real_data_jobs.py` (a script that calls `src/fit.py` with the appropriate hyperparameters), then run the notebook to view the analysis
## Calculating MDL-COMP
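Restating the computation in math (this mirrors the code in this section; see the paper for the formal definition): with $\mu_i$ the eigenvalues of $X^\top X$, $\hat\theta_\lambda = (X^\top X + \lambda I)^{-1} X^\top y$ the ridge estimate, $n$ the number of training points, and $\sigma^2$ the noise variance,

```latex
\text{Prac-MDL-Comp}
  = \min_{\lambda}\; \frac{1}{n}\left[
      \frac{\lVert y - X\hat\theta_\lambda\rVert_2^2}{2\sigma^2}
    + \frac{\lVert\hat\theta_\lambda\rVert_2^2}{2\sigma^2}
    + \frac{1}{2}\sum_i \log\frac{\mu_i + \lambda}{\lambda}
  \right]
```

The three terms correspond to the data-fit cost, the parameter cost, and an eigenvalue term penalizing effective dimensionality.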
Computation of `Prac-MDL-Comp` is fairly straightforward:

```python
import numpy as np
import numpy.linalg as npl
import scipy.optimize


def prac_mdl_comp(X_train, y_train, variance=1):
    '''Calculate Prac-MDL-Comp for this dataset.'''
    eigenvals, _ = npl.eig(X_train.T @ X_train)

    def calc_thetahat(l):
        # ridge estimate with regularization strength l
        inv = npl.pinv(X_train.T @ X_train + l * np.eye(X_train.shape[1]))
        return inv @ X_train.T @ y_train

    def prac_mdl_comp_objective(l):
        thetahat = calc_thetahat(l)
        mse_norm = npl.norm(y_train - X_train @ thetahat)**2 / (2 * variance)
        theta_norm = npl.norm(thetahat)**2 / (2 * variance)
        eigensum = 0.5 * np.sum(np.log((eigenvals + l) / l))
        return (mse_norm + theta_norm + eigensum) / y_train.size

    # minimize the objective over the regularization strength
    opt_solved = scipy.optimize.minimize(prac_mdl_comp_objective, x0=1e-10)
    prac_mdl = opt_solved.fun
    lambda_opt = opt_solved.x
    thetahat = calc_thetahat(lambda_opt)
    return {
        'prac_mdl': prac_mdl,
        'lambda_opt': lambda_opt,
        'thetahat': thetahat,
    }
```

# Reference
- feel free to use/share this code openly
- uses code for mdl-rs from [here](https://github.com/koheimiya/pymdlrs)
- uses fmri data from [here](https://crcns.org/data-sets/vc/vim-2)
- if you find this code useful for your research, please cite the following:
```bibtex
@article{dwivedi2020revisiting,
title={Revisiting complexity and the bias-variance tradeoff},
author={Dwivedi, Raaz and Singh, Chandan and Yu, Bin and Wainwright, Martin},
journal={arXiv preprint arXiv:2006.10189},
year={2020}
}
```
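As a quick sanity check, `prac_mdl_comp` can be exercised on synthetic data. The sketch below repeats the function from the README so it runs standalone; the random design, seed, and dimensions are illustrative choices, not from the repo:

```python
import numpy as np
import numpy.linalg as npl
import scipy.optimize


def prac_mdl_comp(X_train, y_train, variance=1):
    '''Prac-MDL-Comp, as defined in the snippet above.'''
    eigenvals, _ = npl.eig(X_train.T @ X_train)

    def calc_thetahat(l):
        inv = npl.pinv(X_train.T @ X_train + l * np.eye(X_train.shape[1]))
        return inv @ X_train.T @ y_train

    def objective(l):
        thetahat = calc_thetahat(l)
        mse_norm = npl.norm(y_train - X_train @ thetahat)**2 / (2 * variance)
        theta_norm = npl.norm(thetahat)**2 / (2 * variance)
        eigensum = 0.5 * np.sum(np.log((eigenvals + l) / l))
        return (mse_norm + theta_norm + eigensum) / y_train.size

    opt = scipy.optimize.minimize(objective, x0=1e-10)
    return {'prac_mdl': opt.fun,
            'lambda_opt': opt.x,
            'thetahat': calc_thetahat(opt.x)}


# illustrative synthetic data: 50 points, 10 features, light noise
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
y = X @ rng.standard_normal(10) + 0.1 * rng.standard_normal(50)
out = prac_mdl_comp(X, y)
print(sorted(out.keys()))
```

`lambda_opt` is the regularization strength selected by minimizing the description-length objective, and `thetahat` is the corresponding ridge estimate.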