# Transformer Implementations




Transformer implementations, along with examples of how to use them.

Implemented:

- Vanilla Transformer
- ViT - Vision Transformers
- DeiT - Data-efficient image Transformers
- BERT - Bidirectional Encoder Representations from Transformers
- GPT - Generative Pre-trained Transformer

## Installation

From PyPI:

```bash
$ pip install transformer-implementations
```

or build from source:

```bash
python setup.py build
python setup.py install
```

## Example

The `notebooks` directory contains a notebook for each of these models showing its intended use, such as image classification with the Vision Transformer (ViT).
Check them out!

```python
import torch

from transformer_package.models import ViT

# Model hyperparameters, sized for 28x28 single-channel images (e.g. MNIST)
image_size = 28     # input image height/width
channel_size = 1    # number of input channels
patch_size = 7      # each image becomes (28/7)^2 = 16 patch tokens
embed_size = 512    # embedding dimension of each patch token
num_heads = 8       # attention heads per encoder layer
classes = 10        # number of output classes
num_layers = 3      # number of transformer encoder layers
hidden_size = 256   # hidden size of the feed-forward sublayer
dropout = 0.2

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

model = ViT(image_size,
            channel_size,
            patch_size,
            embed_size,
            num_heads,
            classes,
            num_layers,
            hidden_size,
            dropout=dropout).to(DEVICE)

# image_tensor: a batch of images with the assumed layout
# (batch_size, channel_size, image_size, image_size); random data here for illustration
image_tensor = torch.randn(1, channel_size, image_size, image_size).to(DEVICE)

prediction = model(image_tensor)  # one prediction per image in the batch
```
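
A minimal training-step sketch in plain PyTorch is shown below, reusing `model` and `DEVICE` from the example above; `train_loader` stands in for a hypothetical `DataLoader` yielding `(image, label)` batches and is not part of this package.

```python
import torch.nn as nn
import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=3e-4)

model.train()
for images, labels in train_loader:   # hypothetical DataLoader of (image, label) batches
    images, labels = images.to(DEVICE), labels.to(DEVICE)

    optimizer.zero_grad()
    logits = model(images)            # class scores for the batch
    loss = criterion(logits, labels)  # cross-entropy over the class outputs
    loss.backward()
    optimizer.step()
```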

## Language Translation

from "Attention is All You Need": https://arxiv.org/pdf/1706.03762.pdf

Models trained with this implementation:

## Multi-class Image Classification with Vision Transformers (ViT)

from "An Image is Worth 16x16 words: Transformers for image recognition at scale": https://arxiv.org/pdf/2010.11929v1.pdf

Models trained with Implementation:

Note: ViT will not perform great on small datasets
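
The patch tokenization behind the paper's title can be checked directly against the parameters in the example above: a 28x28 image split into 7x7 patches yields (28/7)^2 = 16 patch tokens. The sketch below uses `torch.Tensor.unfold` to make that concrete; it is an illustration, not this package's internal code.

```python
import torch

image = torch.randn(1, 1, 28, 28)  # (batch, channels, height, width)
patch_size = 7

# Slide a non-overlapping 7x7 window over the height and width dimensions.
patches = image.unfold(2, patch_size, patch_size).unfold(3, patch_size, patch_size)
# -> shape (1, 1, 4, 4, 7, 7): a 4x4 grid of 7x7 patches

# Flatten each patch into a vector, as in "An Image is Worth 16x16 Words".
tokens = patches.contiguous().view(1, -1, patch_size * patch_size)
print(tokens.shape)  # torch.Size([1, 16, 49]): 16 patch tokens of 49 pixels each
```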

## Multi-class Image Classification with Data-efficient image Transformers (DeiT)

from "Training data-efficient image transformers & distillation through attention": https://arxiv.org/pdf/2012.12877v1.pdf

Models trained with this implementation:
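
The distillation named in the paper's title trains a dedicated distillation token against a teacher network. In its hard-label form, the loss averages two cross-entropy terms: class-token outputs against the true labels, and distillation-token outputs against the teacher's predicted labels. A minimal sketch of that loss in plain PyTorch follows; tensor names are illustrative and not part of this package's API.

```python
import torch
import torch.nn.functional as F

def hard_distillation_loss(cls_logits: torch.Tensor,
                           dist_logits: torch.Tensor,
                           teacher_logits: torch.Tensor,
                           labels: torch.Tensor) -> torch.Tensor:
    """Hard-label distillation objective from the DeiT paper:
    0.5 * CE(class-token logits, true labels)
    + 0.5 * CE(distillation-token logits, teacher's hard predictions)."""
    teacher_labels = teacher_logits.argmax(dim=-1)  # teacher's hard predictions
    return 0.5 * F.cross_entropy(cls_logits, labels) \
         + 0.5 * F.cross_entropy(dist_logits, teacher_labels)
```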