Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/archinetai/difformer-pytorch
Diffusion based transformer, in PyTorch (Experimental).
artificial-intelligence deep-learning diffusion transformer
Last synced: 8 days ago
- Host: GitHub
- URL: https://github.com/archinetai/difformer-pytorch
- Owner: archinetai
- License: mit
- Created: 2022-09-07T19:24:22.000Z (about 2 years ago)
- Default Branch: main
- Last Pushed: 2022-09-13T10:01:06.000Z (about 2 years ago)
- Last Synced: 2024-10-07T04:29:10.143Z (about 1 month ago)
- Topics: artificial-intelligence, deep-learning, diffusion, transformer
- Language: Python
- Homepage:
- Size: 14.6 KB
- Stars: 25
- Watchers: 2
- Forks: 2
- Open Issues: 2
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Difformer - PyTorch (Experimental)
Diffusion-based transformer, in PyTorch.
```bash
pip install difformer-pytorch
```
[![PyPI - Python Version](https://img.shields.io/pypi/v/difformer-pytorch?style=flat&colorA=black&colorB=black)](https://pypi.org/project/difformer-pytorch/)

## Usage
### Token based
```python
import torch

from difformer_pytorch import Difformer

num_tokens = 1000

difformer = Difformer(
    num_tokens=num_tokens,
    embedding_dim=512,
    num_layers=6
)

# Input tokens and mask
tokens = torch.randint(0, num_tokens, (1, 1024))
mask = torch.ones_like(tokens).bool()

# Train difformer to demask
loss = difformer(tokens=tokens, mask=mask)
loss.backward()

# Sample unmasked prediction given masked start sequence
sampled = difformer.sample(
    tokens=tokens,
    mask=mask,
    num_steps=5
)  # [1, 1024]
```
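The example above uses an all-True mask. In practice one would typically fix part of the sequence and let the model demask the rest; a minimal sketch of building such a partial mask with plain PyTorch follows (the interpretation of `True` as "keep this position fixed" is an assumption for illustration, not confirmed by this README):

```python
import torch

num_tokens = 1000
seq_len = 1024

# Random token sequence, shaped like the README example: [1, 1024]
tokens = torch.randint(0, num_tokens, (1, seq_len))

# Hypothetical partial mask: the first half of the sequence is kept
# fixed, the second half is left for the model to fill in
# (assumed mask semantics).
mask = torch.zeros(1, seq_len, dtype=torch.bool)
mask[:, : seq_len // 2] = True
```

The resulting `mask` has the same `[1, 1024]` shape as `tokens`, with 512 positions marked `True`, and could be passed to `difformer.sample` in place of the all-True mask.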
### Embedding based
```python
import torch

from difformer_pytorch import Difformer

difformer = Difformer(
    embedding_dim=512,
    num_layers=6
)

# Input embedding and mask
embedding = torch.randn(1, 1024, 512)
mask = torch.ones(1, 1024).bool()

# Train difformer
loss = difformer(embedding=embedding, mask=mask)
loss.backward()

# Sample prediction given masked start embedding
sampled = difformer.sample(
    embedding=embedding,
    mask=mask,  # Optional mask to apply on embeddings
    num_steps=5
)  # [1, 1024, 512]
```