https://github.com/zib-iol/bimp
Code to reproduce the experiments of ICLR2023-paper: How I Learned to Stop Worrying and Love Retraining
- Host: GitHub
- URL: https://github.com/zib-iol/bimp
- Owner: ZIB-IOL
- Created: 2022-02-21T12:43:40.000Z (about 3 years ago)
- Default Branch: master
- Last Pushed: 2023-02-23T18:04:00.000Z (about 2 years ago)
- Last Synced: 2025-03-31T13:16:56.413Z (about 2 months ago)
- Topics: deep-learning, learning-rate-scheduling, neural-network, optimization, pruning, pytorch, sparsity
- Language: Python
- Homepage:
- Size: 60.5 KB
- Stars: 8
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- Citation: citation.bib
README
## [ICLR2023] How I Learned to Stop Worrying and Love Retraining
*Authors: [Max Zimmer](https://maxzimmer.org/), [Christoph Spiegel](http://www.christophspiegel.berlin/), [Sebastian Pokutta](http://www.pokutta.com/)*

This repository contains the code to reproduce the experiments from the ICLR 2023 paper ["How I Learned to Stop Worrying and Love Retraining"](https://arxiv.org/abs/2111.00843).
The code is based on [PyTorch 1.9](https://pytorch.org/) and the experiment-tracking platform [Weights & Biases](https://wandb.ai).
The code to reproduce the semantic segmentation as well as the NLP experiments will be added soon.

### Structure and Usage
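Experiments are configured through the dictionary format of Weights & Biases, where a run's hyperparameters live in a plain Python dictionary exposed via `wandb.config`. As a minimal sketch of that pattern, here is a hypothetical default-config dictionary with a merge helper; the keys shown are illustrative only and are not the actual schema expected by this repository's `main.py` (the `wandb` dependency itself is omitted to keep the sketch self-contained):

```python
# Hypothetical default configuration in the W&B dictionary format.
# Key names are illustrative, not the repository's actual schema.
defaults = {
    "dataset": "cifar10",
    "model": "resnet18",
    "optimizer": "sgd",
    "learning_rate": 0.1,
    "n_epochs": 100,
    "pruning_strategy": "IMP",   # e.g. a method implemented in strategies/
    "goal_sparsity": 0.9,
    "retrain_schedule": "LLR",   # learning-rate schedule used for retraining
}


def build_run_config(overrides=None):
    """Merge per-run overrides (e.g. from a W&B sweep) into the defaults,
    mirroring how wandb.init(config=defaults) would populate wandb.config."""
    config = dict(defaults)
    config.update(overrides or {})
    return config


if __name__ == "__main__":
    # A sweep agent would typically override individual keys per run.
    cfg = build_run_config({"goal_sparsity": 0.98})
    print(cfg["pruning_strategy"], cfg["goal_sparsity"])
```

In the real entry point, such a dictionary would be passed to `wandb.init(config=...)`, so that sweeps can override individual keys per run.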
Experiments are started from the following file:
- [`main.py`](main.py): Starts experiments using the dictionary format of Weights & Biases.

The rest of the project is structured as follows:
- [`strategies`](strategies): Contains all used sparsification methods.
- [`runners`](runners): Contains classes to control the training and collection of metrics.
- [`metrics`](metrics): Contains all metrics as well as FLOP computation methods.
- [`models`](models): Contains all model architectures used.
- [`utilities`](utilities): Contains useful auxiliary functions and classes.

### Citation
If you find the paper or the implementation useful for your own research, please consider citing:

```
@inproceedings{zimmer2023how,
title={How I Learned to Stop Worrying and Love Retraining},
author={Max Zimmer and Christoph Spiegel and Sebastian Pokutta},
booktitle={The Eleventh International Conference on Learning Representations},
year={2023},
url={https://openreview.net/forum?id=_nF5imFKQI}
}
```