Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/lucidrains/hamburger-pytorch
Pytorch implementation of the hamburger module from the ICLR 2021 paper "Is Attention Better Than Matrix Decomposition?"
- Host: GitHub
- URL: https://github.com/lucidrains/hamburger-pytorch
- Owner: lucidrains
- License: mit
- Created: 2020-11-11T20:50:19.000Z (about 4 years ago)
- Default Branch: main
- Last Pushed: 2021-01-13T18:18:21.000Z (almost 4 years ago)
- Last Synced: 2024-10-23T08:52:33.977Z (22 days ago)
- Topics: artificial-intelligence, deep-learning, matrix-decomposition
- Language: Python
- Size: 85.9 KB
- Stars: 98
- Watchers: 6
- Forks: 8
- Open Issues: 1
- Metadata Files:
- Readme: README.md
- License: LICENSE
README
## 🍔 - Pytorch
Pytorch implementation of the hamburger module from the ICLR 2021 paper "Is Attention Better Than Matrix Decomposition?". Following Betteridge's law, the answer according to the paper is "No" for segmentation and GANs.
This repository contains the NMF-MU (nonnegative matrix factorization with multiplicative updates) module sandwiched by linear projections; a sketch of the core update is given below.

Update: I tried this, but did not get better results than just using linear attention.
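For intuition, the matrix-decomposition "ham" is just a low-rank reconstruction of the input features. Below is a minimal sketch of NMF with multiplicative updates (Lee & Seung), assuming a nonnegative input and a rank of `dim // ratio`; the function name and initialization are illustrative, not the package's internal code:

```python
import torch

def nmf_mu(x, ratio = 8, K = 6, eps = 1e-8):
    # x: (batch, dim, n) features, assumed nonnegative (e.g. post-ReLU),
    # factorized as x ≈ bases @ codes with rank r = dim // ratio (assumed)
    batch, dim, n = x.shape
    r = dim // ratio
    bases = torch.rand(batch, dim, r, device = x.device)  # W: (batch, dim, r)
    codes = torch.rand(batch, r, n, device = x.device)    # H: (batch, r, n)

    for _ in range(K):
        # Lee & Seung multiplicative updates; factors stay nonnegative
        # H <- H * (Wᵀ X) / (Wᵀ W H)
        codes = codes * (bases.transpose(1, 2) @ x) / (bases.transpose(1, 2) @ bases @ codes + eps)
        # W <- W * (X Hᵀ) / (W H Hᵀ)
        bases = bases * (x @ codes.transpose(1, 2)) / (bases @ codes @ codes.transpose(1, 2) + eps)

    return bases @ codes  # low-rank reconstruction, same shape as x
```

Each iteration costs O(batch × dim × r × n), linear in the sequence length n, which is the paper's efficiency argument against quadratic self-attention.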
## Install
```bash
$ pip install hamburger-pytorch
```

## Usage
```python
import torch
from hamburger_pytorch import Hamburger

hamburger = Hamburger(
    dim = 512,      # input dimension
    n = 32 * 32,    # size of the sequence, here the height times width of the feature map
    ratio = 8,      # matrix factorization ratio, recommended to be kept at 8
    K = 6           # number of iterations, 6 is optimal as shown in the paper
)

x = torch.randn(1, 512, 32, 32)
hamburger(x) + x # (1, 512, 32, 32)
```
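The residual `hamburger(x) + x` above can be wrapped into a small module. A minimal sketch, reusing the constructor arguments from the usage example (the `HamburgerBlock` name is hypothetical and not part of the package):

```python
import torch
from torch import nn
from hamburger_pytorch import Hamburger

class HamburgerBlock(nn.Module):
    # hypothetical convenience wrapper: hamburger with the residual baked in
    def __init__(self, dim, n, ratio = 8, K = 6):
        super().__init__()
        self.hamburger = Hamburger(dim = dim, n = n, ratio = ratio, K = K)

    def forward(self, x):
        return self.hamburger(x) + x

block = HamburgerBlock(dim = 512, n = 32 * 32)
feats = torch.randn(1, 512, 32, 32)
out = block(feats) # (1, 512, 32, 32)
```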
## Citations

```bibtex
@inproceedings{
anonymous2021is,
title={Is Attention Better Than Matrix Decomposition?},
author={Anonymous},
booktitle={Submitted to International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=1FvkSpWosOl},
note={under review}
}
```