Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/lucidrains/perceiver-ar-pytorch
Implementation of Perceiver AR, Deepmind's new long-context attention network based on Perceiver architecture, in Pytorch
- Host: GitHub
- URL: https://github.com/lucidrains/perceiver-ar-pytorch
- Owner: lucidrains
- License: mit
- Created: 2022-06-18T15:58:56.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2023-04-10T08:57:10.000Z (over 1 year ago)
- Last Synced: 2024-12-10T10:03:49.679Z (12 days ago)
- Topics: artficial-intelligence, attention-mechanism, deep-learning, long-context, transformer
- Language: Python
- Homepage:
- Size: 34.2 MB
- Stars: 86
- Watchers: 4
- Forks: 4
- Open Issues: 5
- Metadata Files:
  - Readme: README.md
  - License: LICENSE
Awesome Lists containing this project
README
## Perceiver AR - Pytorch
Implementation of Perceiver AR, Deepmind's new long-context attention network based on Perceiver architecture, in Pytorch.
I am building this out of popular demand, not because I believe in the architecture. As someone else put it succinctly, this is equivalent to an encoder / decoder transformer architecture where the encoder has 0 layers (and the decoder's cross attention is restricted to 1 layer).
However, the experimental results they provided are still worthwhile, and I'll build it out so that students and researchers alike can explore this avenue.
Update: it seems to be performing decently well on enwik8 at a context length of 4096. Maybe I was wrong to be pessimistic.
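To make that comparison concrete, below is a rough, hypothetical sketch using plain PyTorch modules (not this repository's actual code; all module names and shapes are assumptions): the prefix only supplies keys and values to a single causal cross-attention, and only the remaining positions pass through the full causal self-attention stack.

```python
import torch
import torch.nn as nn

# Hypothetical illustration only -- not the perceiver-ar-pytorch API.
dim, heads, depth = 512, 8, 8
max_seq_len, prefix_len = 4096, 3072

cross_attn = nn.MultiheadAttention(dim, heads, batch_first = True)
self_attn_stack = nn.ModuleList([
    nn.TransformerEncoderLayer(dim, heads, batch_first = True) for _ in range(depth)
])

x = torch.randn(1, max_seq_len, dim)                  # already-embedded token sequence
prefix, latents = x[:, :prefix_len], x[:, prefix_len:]

# single cross-attention: latent positions query the whole sequence, causally masked
# so that latent position i only sees the prefix plus latents up to i
cross_mask = torch.full((max_seq_len - prefix_len, max_seq_len), float('-inf')).triu(prefix_len + 1)
latents, _ = cross_attn(latents, x, x, attn_mask = cross_mask)

# the "decoder": causal self-attention over the 1024 latent positions only
causal_mask = nn.Transformer.generate_square_subsequent_mask(latents.shape[1])
for layer in self_attn_stack:
    latents = layer(latents, src_mask = causal_mask)

# a final projection of `latents` would yield the (1, 1024, num_tokens) logits
```

Only the latent positions produce predictions, which is why the usage example below returns logits for just the last 1024 positions of the 4096-token input.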
## Install
```bash
$ pip install perceiver-ar-pytorch
```

## Usage
```python
import torch
from perceiver_ar_pytorch import PerceiverAR

model = PerceiverAR(
num_tokens = 20000, # number of tokens
dim = 512, # model dimensions
depth = 8, # model depth
dim_head = 64, # attention head dimension
heads = 8, # attention heads
max_seq_len = 4096, # total max sequence length
    cross_attn_seq_len = 3072,   # length of the prefix that is cross-attended to but does not undergo self attention (must be less than max_seq_len)
    cross_attn_dropout = 0.5,    # percentage of the prefix to drop out during training; the paper's extensive experiments showed up to 50% dropout helped prevent overfitting
)

x = torch.randint(0, 20000, (1, 4096))
logits = model(x) # (1, 1024, 20000) - (4096 [seq len] - 3072 [perceived prefix] == 1024)
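
# Hypothetical training step (an assumption, not an API shown in this README):
# since logits only cover the 1024 positions after the prefix, a next-token
# cross entropy loss would be taken against the tokens following those positions.
import torch.nn.functional as F

seq = torch.randint(0, 20000, (1, 4097))   # one extra token to form shifted targets
inp, target = seq[:, :-1], seq[:, 1:]      # standard autoregressive shift by one
logits = model(inp)                        # (1, 1024, 20000)
loss = F.cross_entropy(
    logits.transpose(1, 2),                # cross_entropy expects (batch, classes, seq)
    target[:, -logits.shape[1]:]           # targets for the non-prefix positions only
)
loss.backward()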
```

## Test
Training on enwik8 at a sequence length of 4096:
```bash
$ python train.py
```

## Citations
```bibtex
@article{Hawthorne2022GeneralpurposeLA,
title = {General-purpose, long-context autoregressive modeling with Perceiver AR},
author = {Curtis Hawthorne and Andrew Jaegle and Cătălina Cangea and Sebastian Borgeaud and Charlie Nash and Mateusz Malinowski and Sander Dieleman and Oriol Vinyals and Matthew M. Botvinick and Ian Simon and Hannah R. Sheahan and Neil Zeghidour and Jean-Baptiste Alayrac and Jo{\~a}o Carreira and Jesse Engel},
journal = {ArXiv},
year = {2022},
volume = {abs/2202.07765}
}
```