Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.

https://github.com/google-research/head2toe
- Host: GitHub
- URL: https://github.com/google-research/head2toe
- Owner: google-research
- License: apache-2.0
- Created: 2021-12-06T20:15:15.000Z (almost 3 years ago)
- Default Branch: main
- Last Pushed: 2022-12-12T12:55:05.000Z (almost 2 years ago)
- Last Synced: 2024-05-09T17:11:52.374Z (6 months ago)
- Language: Python
- Size: 539 KB
- Stars: 81
- Watchers: 6
- Forks: 12
- Open Issues: 1
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
README
# Head2Toe: Utilizing Intermediate Representations for Better OOD Generalization
**Paper**: [goo.gle/h2t-paper](https://goo.gle/h2t-paper)
**Video**: [goo.gle/h2t-video](https://goo.gle/h2t-video)

Code for reproducing our results in the Head2Toe paper.
## Setup
First clone this repo.
```bash
git clone https://github.com/google-research/head2toe.git
cd head2toe
```

We need to download the pre-trained ImageNet checkpoints. The commands below
place the checkpoints under the correct folder; if you use a different folder
name, update the paths in `head2toe/configs_eval/finetune.py`.
```bash
mkdir checkpoints
cd checkpoints
wget -c https://storage.googleapis.com/gresearch/head2toe/imagenetr50.tar.gz
wget -c https://storage.googleapis.com/gresearch/head2toe/imagenetvitB16.tar.gz
tar -xvf imagenetr50.tar.gz
tar -xvf imagenetvitB16.tar.gz
rm *.tar.gz
cd ../
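# Optional sanity check (a sketch, not part of the original instructions):
# list the extracted checkpoint directories before running experiments.
ls checkpoints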
```

Let's run some tests. The following script creates a virtual environment and
installs the necessary libraries. Finally, it runs a few tests.
```bash
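# Per the text above: creates a virtual environment under env/, installs the
# required libraries, and runs a few tests.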
bash run.sh
```

We need to activate the virtual environment before running an experiment. With
that, we are ready to run some trivial Caltech101 experiments.
```bash
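# Activate the virtual environment created by run.sh and make the repo
# importable by adding it to PYTHONPATH.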
source env/bin/activate
export PYTHONPATH=$PYTHONPATH:$PWD
python head2toe/evaluate.py \
--config=head2toe/configs_eval/finetune.py:imagenetr50 \
--config.eval_mode='test' --config.dataset='data.caltech101'
```

Note that running evaluation for each task requires downloading and preparing
multiple datasets, which can take up to a day. Please check out
https://github.com/google-research/task_adaptation for more details on
installing the datasets.
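If you want to warm the dataset cache ahead of time, one option is to
pre-download individual tasks with TensorFlow Datasets, which the VTAB
task_adaptation tooling builds on. A minimal sketch, assuming
`tensorflow-datasets` is installed in the environment created by `run.sh` (an
assumption) and using the TFDS name `caltech101`:

```bash
# Hypothetical pre-download of a single dataset via TFDS; adjust the dataset
# name per task (see the task_adaptation repo for the full list).
source env/bin/activate
python -c "import tensorflow_datasets as tfds; tfds.load('caltech101')"
```

## Running Head2Toe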
Our results presented in Table 1 of our paper can be reproduced by running the
following command for the Caltech-101 task. This takes 10-15 minutes on a
single V100 GPU.
```bash
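# Reproduces the Caltech-101 row of Table 1: keep_fraction=0.01 keeps 1% of
# the aggregated intermediate features after feature selection.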
python head2toe/evaluate.py \
--config=head2toe/configs_eval/finetune_h2t.py:imagenetr50 \
--config.dataset='data.caltech101' \
--config.eval_mode='test' --config.learning.cached_eval=False \
--config.backbone.additional_features_target_size=8192 \
--config.learning.feature_selection.keep_fraction=0.01 \
--config.learning.feature_selection.learning_config_overwrite.group_lrp_regularizer_coef=0.00001 \
--config.learning.learning_rate=0.01 --config.learning.training_steps=5000 \
--config.learning.log_freq=1000
```
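The setup step also downloads a ViT-B/16 checkpoint (`imagenetvitB16.tar.gz`).
Assuming the config exposes it under a tag matching that archive name (an
assumption; check `head2toe/configs_eval/finetune_h2t.py` for the exact tag),
the same evaluation with the ViT backbone might look like:

```bash
# Hypothetical ViT-B/16 run; the config tag 'imagenetvitB16' is assumed, and
# the remaining flags mirror the ResNet-50 command above.
python head2toe/evaluate.py \
  --config=head2toe/configs_eval/finetune_h2t.py:imagenetvitB16 \
  --config.dataset='data.caltech101' --config.eval_mode='test'
```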
Hyper-parameters used for different tasks can be found in the appendix. Here is
the command for dSprites-Orientation task.
```bash
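# dSprites-Orientation (16 classes): note the larger keep_fraction (0.2) and
# shorter training run (500 steps) compared to Caltech-101.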
python head2toe/evaluate.py \
--config=head2toe/configs_eval/finetune_h2t.py:imagenetr50 \
--config.dataset='data.dsprites(predicted_attribute="label_orientation",num_classes=16)' \
--config.eval_mode='test' --config.learning.cached_eval=False \
--config.backbone.additional_features_target_size=512 \
--config.learning.feature_selection.keep_fraction=0.2 \
--config.learning.feature_selection.learning_config_overwrite.group_lrp_regularizer_coef=0.00001 \
--config.learning.learning_rate=0.01 --config.learning.training_steps=500 \
--config.learning.log_freq=1000
```

## Running other baselines
- **Regularization Baselines**: Use the `finetune_h2t.py` config together with
  the `l1_regularizer`, `l2_regularizer`, or `group_lrp_regularizer_coef` flags.
- **Linear**: Use the `finetune.py` config; a minimal command is sketched below.

Set `config.learning.finetune_backbones` to true to enable finetuning of the
backbone for any experiment. If you would like to run any other experiments or
have questions, feel free to create a new issue.
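As an illustration, a linear-probe baseline reuses the earlier `finetune.py`
command, and full finetuning adds the `config.learning.finetune_backbones`
flag named above (the `=True` override syntax is assumed to match the other
flags):

```bash
# Linear baseline: frozen backbone with the finetune.py config.
python head2toe/evaluate.py \
  --config=head2toe/configs_eval/finetune.py:imagenetr50 \
  --config.eval_mode='test' --config.dataset='data.caltech101'

# Full finetuning: same command plus the documented backbone flag.
python head2toe/evaluate.py \
  --config=head2toe/configs_eval/finetune.py:imagenetr50 \
  --config.eval_mode='test' --config.dataset='data.caltech101' \
  --config.learning.finetune_backbones=True
```

## Citation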
```
@InProceedings{evci22h2t,
title = {{H}ead2{T}oe: Utilizing Intermediate Representations for Better Transfer Learning},
author = {Evci, Utku and Dumoulin, Vincent and Larochelle, Hugo and Mozer, Michael C},
booktitle = {Proceedings of the 39th International Conference on Machine Learning},
pages = {6009--6033},
year = {2022},
editor = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
volume = {162},
series = {Proceedings of Machine Learning Research},
month = {17--23 Jul},
publisher = {PMLR},
pdf = {https://proceedings.mlr.press/v162/evci22a/evci22a.pdf},
url = {https://proceedings.mlr.press/v162/evci22a.html},
}
```
## Disclaimer
This is not an officially supported Google product.