Higher-order gradients in PyTorch, Parallelized
https://github.com/activatedgeek/higher-distributed
- Host: GitHub
- URL: https://github.com/activatedgeek/higher-distributed
- Owner: activatedgeek
- License: MIT
- Created: 2023-05-21T00:05:07.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2023-05-22T16:12:42.000Z (over 2 years ago)
- Last Synced: 2025-01-27T07:27:32.983Z (about 1 year ago)
- Topics: distributed-pytorch, machine-learning, maml-algorithm, meta-learning, pytorch
- Language: Python
- Homepage: https://sanyamkapoor.com/kb/higher-order-gradients-in-pytorch-parallelized
- Size: 7.81 KB
- Stars: 0
- Watchers: 3
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Higher Distributed
This is the supporting code repository for the article [Higher-order gradients in PyTorch, Parallelized](https://sanyamkapoor.com/kb/higher-order-gradients-in-pytorch-parallelized) by Sanyam Kapoor and Ramakrishna Vedantam.
## Setup
(Optional) Set up and activate a new Python environment via conda, replacing `<env-name>` with a name of your choice:
```shell
conda create -n <env-name> python
conda activate <env-name>
```
Install a CUDA-enabled PyTorch build from [here](https://pytorch.org). The codebase
has been tested with PyTorch `1.13` on CUDA 11.8.
```shell
pip install 'torch<2' torchvision --extra-index-url https://download.pytorch.org/whl/cu118
```
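To quickly confirm that the install picked up a CUDA-enabled build (a sanity check, not part of this repository), you can run:
```python
# Sanity check (not part of the repository): report the installed PyTorch
# version, the CUDA version it was built against, and GPU visibility.
import torch

print(torch.__version__)          # expected: a 1.13.x build
print(torch.version.cuda)         # expected: 11.8
print(torch.cuda.is_available())  # True when at least one GPU is usable
```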
Finally, in the same target environment (e.g. the one set up above), run the following to install all the dependencies:
```shell
pip install -e .
```
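As a quick check that the dependencies resolved, you can confirm that `accelerate` imports (an assumption here is that it is among the declared dependencies, since the launch command below relies on it):
```python
# Quick check (assumption: accelerate is installed as a dependency of this
# repo, since the launch command in the Run section uses it).
import accelerate

print(accelerate.__version__)
```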
## Run
We use the `CUDA_VISIBLE_DEVICES` environment variable to select which GPUs are visible for use.
For instance, to use four GPUs:
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 accelerate launch --multi_gpu train_toy.py
```
The default parameters should not need changing for the demo.
**NOTE**: The device IDs may need to be adjusted depending on hardware availability.
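For context, the demo revolves around higher-order gradients. A minimal, repo-independent sketch of computing one with plain PyTorch autograd (this is not the `train_toy.py` code, only the core mechanism) looks like:
```python
# Minimal illustration (not the repository's code): a second-order gradient
# via torch.autograd.grad with create_graph=True.
import torch

x = torch.tensor(3.0, requires_grad=True)
y = x ** 3

# dy/dx = 3x^2, keeping the graph so we can differentiate again.
(grad_x,) = torch.autograd.grad(y, x, create_graph=True)
# d^2y/dx^2 = 6x.
(grad2_x,) = torch.autograd.grad(grad_x, x)

print(grad_x.item(), grad2_x.item())  # 27.0, 18.0
```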
## License
MIT