https://github.com/weiaicunzai/cpc-pytorch
Faster-rcnn Pytorch implementation
- Host: GitHub
- URL: https://github.com/weiaicunzai/cpc-pytorch
- Owner: weiaicunzai
- License: mit
- Created: 2017-05-23T02:16:13.000Z (about 8 years ago)
- Default Branch: master
- Last Pushed: 2021-10-08T05:11:16.000Z (over 3 years ago)
- Last Synced: 2025-02-04T19:39:08.331Z (4 months ago)
- Topics: faster-rcnn, pytorch
- Language: Python
- Homepage:
- Size: 13.5 MB
- Stars: 0
- Watchers: 3
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Greedy InfoMax
We can train a neural network **without end-to-end backpropagation** and achieve competitive performance.
This repo provides the code for the experiments in our paper:
Sindy Löwe*, Peter O'Connor, Bastiaan S. Veeling* - [Putting An End to End-to-End: Gradient-Isolated Learning of Representations](https://arxiv.org/abs/1905.11786)
*equal contribution
## What is Greedy InfoMax?
We simply divide existing architectures into gradient-isolated modules and optimize the mutual information between cross-patch intermediate representations.

What we found exciting is that despite each module being trained greedily, it improves upon the representation of the previous module. This enables you to keep stacking modules until downstream performance saturates.
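To make the mechanism concrete, here is a minimal PyTorch sketch of gradient isolation. This is not the repo's actual code: the module sizes, the simplified InfoNCE variant, and the names `infonce_loss` and `greedy_infomax_step` are illustrative assumptions. Each module optimizes its own local contrastive loss and passes a detached copy of its output to the next module, so no gradients flow between modules.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def infonce_loss(z, k=1):
    # z: (batch, channels, time). Treat each time step as a "patch" and
    # score it against the representation k steps ahead; all other
    # (batch, time) positions act as negatives. A simplified InfoNCE.
    anchors = z[:, :, :-k].permute(0, 2, 1).flatten(0, 1)    # (B*(T-k), C)
    positives = z[:, :, k:].permute(0, 2, 1).flatten(0, 1)   # (B*(T-k), C)
    logits = anchors @ positives.T             # pairwise similarity scores
    labels = torch.arange(logits.size(0))      # true pairs sit on the diagonal
    return F.cross_entropy(logits, labels)

# A hypothetical stack of three gradient-isolated encoder modules.
modules = nn.ModuleList([
    nn.Sequential(nn.Conv1d(cin, cout, kernel_size=3, padding=1), nn.ReLU())
    for cin, cout in [(1, 64), (64, 64), (64, 64)]
])
optimizers = [torch.optim.Adam(m.parameters(), lr=1e-4) for m in modules]

def greedy_infomax_step(x):                    # x: (batch, 1, time)
    for module, opt in zip(modules, optimizers):
        z = module(x)
        loss = infonce_loss(z)                 # each module's own local loss
        opt.zero_grad()
        loss.backward()                        # gradients stay inside this module
        opt.step()
        x = z.detach()                         # block gradient flow to the next module

greedy_infomax_step(torch.randn(8, 1, 100))    # e.g. a batch of 1-D signals
```
Dropping the `detach()` and optimizing a single loss over the composed stack would recover end-to-end training, which is what the `--model_splits 1` baseline described below does.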
## How to run the code
### Dependencies
- [Python and Conda](https://www.anaconda.com/)
- Set up the conda environment `infomax` by running:
```bash
bash setup_dependencies.sh
```

Additionally, for the audio experiments:
- Install [torchaudio](https://github.com/pytorch/audio) in the `infomax` environment
- Download audio datasets
```bash
bash download_audio_data.sh
```

### Usage
#### Vision Experiments
- To replicate the vision results from our paper, run:
```bash
source activate infomax
bash vision_traineval.sh
```
This will train the Greedy InfoMax model and evaluate it by training a linear image classifier on top of it.
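For intuition, such a linear evaluation amounts to freezing the trained encoder and fitting a single linear layer on its features. A minimal sketch, assuming a generic `encoder` and data `loader` (both placeholders, not the repo's actual evaluation code):
```python
import torch
import torch.nn as nn

def train_linear_probe(encoder, loader, feat_dim, num_classes, epochs=10):
    # Fit a linear classifier on frozen encoder features (illustrative sketch).
    encoder.eval()
    probe = nn.Linear(feat_dim, num_classes)
    opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            with torch.no_grad():              # no gradients reach the encoder
                feats = encoder(images).flatten(1)
            loss = loss_fn(probe(feats), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return probe
```
Because the encoder stays frozen, the probe's accuracy measures the quality of the learned representation rather than the classifier's capacity.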
- View all possible command-line options by running
```bash
python -m GreedyInfoMax.vision.main_vision --help
```
Some of the more important options are:
* To train the baseline CPC model with end-to-end backpropagation instead of the Greedy InfoMax model, set:
```bash
--model_splits 1
```
* If you want to save GPU memory, you can train the modules sequentially, one at a time, by setting the module to be trained (0-2), e.g.
```bash
--train_module 0
```

#### Audio Experiments
- To replicate the audio results from our paper, run:
```bash
source activate infomax
bash audio_traineval.sh
```
This will train the Greedy InfoMax model as well as evaluate it by training two linear classifiers on top of it - one for speaker and one for phone classification.
- View all possible command-line options by running
```bash
python -m GreedyInfoMax.audio.main_audio --help
```
Some of the more important options are:
* To train the baseline CPC model with end-to-end backpropagation instead of the Greedy InfoMax model, set:
```bash
--model_splits 1
```
* If you want to save GPU memory, you can train the layers sequentially, one at a time, by setting the layer to be trained (0-5), e.g. (see the sketch after this list):
```bash
--train_layer 0
```
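As a rough illustration of why training one layer or module at a time saves memory (reusing `infonce_loss` and the module list from the sketch above; again an assumed sketch, not the repo's training loop): the earlier modules run in inference mode, so their activations are not stored for backpropagation, and only the selected module carries gradients and optimizer state.
```python
import torch

def train_single_module(modules, optimizers, train_idx, x):
    # Earlier modules run without gradient tracking: their activations are
    # not kept for backprop, so peak memory scales with a single module.
    with torch.no_grad():
        for m in modules[:train_idx]:
            x = m(x)
    z = modules[train_idx](x)
    loss = infonce_loss(z)                     # the module's own local loss
    optimizers[train_idx].zero_grad()
    loss.backward()                            # updates the selected module only
    optimizers[train_idx].step()
```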
## Want to learn more about Greedy InfoMax?
Check out my [blog post](https://loewex.github.io/GreedyInfoMax.html) for an intuitive explanation of Greedy InfoMax. Additionally, you can watch my [presentation at NeurIPS 2019](https://slideslive.com/38923276). My slides for this talk are available [here](media/Presentation_GreedyInfoMax_NeurIPS.pdf).
## Cite
Please cite our paper if you use this code in your own work:
```
@inproceedings{lowe2019putting,
title={Putting an End to End-to-End: Gradient-Isolated Learning of Representations},
author={L{\"o}we, Sindy and O'Connor, Peter and Veeling, Bastiaan},
booktitle={Advances in Neural Information Processing Systems},
pages={3039--3051},
year={2019}
}
```

## References
- [Representation Learning with Contrastive Predictive Coding - Oord et al.](https://arxiv.org/abs/1807.03748)