https://github.com/notedance/pytorch-pcgrad
PyTorch Implementation of "Gradient Surgery for Multi-Task Learning" using multiprocessing
- Host: GitHub
- URL: https://github.com/notedance/pytorch-pcgrad
- Owner: NoteDance
- License: apache-2.0
- Created: 2025-05-17T14:51:36.000Z (9 months ago)
- Default Branch: main
- Last Pushed: 2025-05-17T14:53:48.000Z (9 months ago)
- Last Synced: 2025-06-10T20:54:15.339Z (8 months ago)
- Topics: deep-learning, deep-reinforcement-learning, multi-task-learning, multi-task-reinforcement-learning, multi-task-rl, optimizer, pytorch, reinforcement-learning
- Language: Python
- Homepage:
- Size: 8.79 KB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Pytorch-PCGrad
PyTorch Implementation of "Gradient Surgery for Multi-Task Learning" using multiprocessing
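
PCGrad ("gradient surgery", from the cited paper by Yu et al., 2020) resolves conflicts between per-task gradients: whenever two task gradients have a negative inner product, each is projected onto the normal plane of the other before the update. Below is a minimal sketch of that projection rule on flattened gradient vectors; it illustrates the general technique from the paper, not this repository's multiprocessing implementation, and `project_conflicting` is a hypothetical name:

```python
import torch

def project_conflicting(g_i: torch.Tensor, g_j: torch.Tensor) -> torch.Tensor:
    # If g_i and g_j conflict (negative inner product), project g_i onto the
    # normal plane of g_j:  g_i <- g_i - (g_i . g_j / ||g_j||^2) * g_j
    dot = torch.dot(g_i, g_j)
    if dot < 0:
        g_i = g_i - (dot / g_j.norm().pow(2)) * g_j
    return g_i
```

For example, `project_conflicting(torch.tensor([1., 0.]), torch.tensor([-1., 1.]))` returns `[0.5, 0.5]`, which is orthogonal to the conflicting gradient; non-conflicting gradients pass through unchanged.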
# Usage
```python
import torch
import torch.nn as nn
import torch.optim as optim
from ppcgrad import PPCGrad
# toy model and forward pass so the snippet is self-contained (illustrative only)
net = nn.Linear(10, 2)
out = net(torch.randn(4, 10))

# wrap your favorite optimizer
optimizer = PPCGrad(optim.Adam(net.parameters()))

num_tasks = 2
losses = [out[:, 0].pow(2).mean(), out[:, 1].pow(2).mean()]  # one loss per task
assert len(losses) == num_tasks

optimizer.pc_backward(losses)  # compute per-task gradients and apply the PCGrad projection
optimizer.step()               # apply the modified gradients
```
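
In a full training loop this might look as follows. This is a minimal sketch assuming a toy two-task regression setup: the network, learning rate, and data are illustrative, gradients are cleared by calling `zero_grad()` on the wrapped `Adam` instance (a standard PyTorch call), and `pc_backward`/`step` are the calls shown above:

```python
import torch
import torch.nn as nn
import torch.optim as optim
from ppcgrad import PPCGrad

# illustrative shared trunk whose two outputs serve as per-task predictions
net = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
inner = optim.Adam(net.parameters(), lr=1e-3)
optimizer = PPCGrad(inner)

x = torch.randn(32, 10)       # toy inputs
targets = torch.randn(32, 2)  # column t holds the targets for task t

for _ in range(5):
    inner.zero_grad()  # clear gradients on the wrapped optimizer
    preds = net(x)
    losses = [nn.functional.mse_loss(preds[:, t], targets[:, t]) for t in range(2)]
    optimizer.pc_backward(losses)  # per-task backward passes + PCGrad projection
    optimizer.step()               # update with the surgically combined gradients
```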