Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/ivanbongiorni/maximal
A TensorFlow-compatible Python library that provides models and layers to implement custom Transformer neural networks. Built on TensorFlow 2.
- Host: GitHub
- URL: https://github.com/ivanbongiorni/maximal
- Owner: IvanBongiorni
- License: MIT
- Created: 2022-09-20T21:04:51.000Z (about 2 years ago)
- Default Branch: main
- Last Pushed: 2023-10-29T14:13:10.000Z (about 1 year ago)
- Last Synced: 2024-10-10T17:14:18.556Z (27 days ago)
- Topics: attention-is-all-you-need, attention-mechanism, deep-learning, keras, machine-learning, natural-language-generation, natural-language-processing, natural-language-understanding, neural-network, nlp, tensorflow, tensorflow2, transformer, transformers
- Language: Python
- Homepage:
- Size: 396 KB
- Stars: 9
- Watchers: 3
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
- Changelog: CHANGELOG.txt
- License: LICENSE
- Codeowners: CODEOWNERS
Awesome Lists containing this project
README
# maximal
See the [Official Documentation site](https://ivanbongiorni.github.io/maximal/)
Current version: **1.2.1**
A TensorFlow-compatible Python library that provides models and layers to implement custom Transformer neural networks.
Built on TensorFlow 2.
*Logo generated by Stable Diffusion 2.1*

# Installation
Installation is straightforward:

```
pip install maximal
```

# How to use it?
`maximal` is commonly imported as:

```
import maximal as ml
from maximal.layers import TransformerLayer, GPTLayer
```

It can then be used in a `tf.keras` model like any other layer.
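As a minimal sketch of that idea, the snippet below stacks a `GPTLayer` in a `Sequential()` model for next-token prediction. The constructor arguments (`depth`, `heads`) and the `PositionalEmbedding(...)` signature are illustrative assumptions, not the documented API; see the [Official Documentation](https://ivanbongiorni.github.io/maximal/) for the real signatures.

```
import tensorflow as tf
from maximal.layers import PositionalEmbedding, GPTLayer

VOCAB_SIZE = 1000  # illustrative sizes
MAX_LEN = 64
DEPTH = 128

# NOTE: the argument names below are assumptions for illustration only.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(MAX_LEN,)),
    PositionalEmbedding(MAX_LEN, VOCAB_SIZE, DEPTH),  # token + position embeddings
    GPTLayer(depth=DEPTH, heads=4),                   # causal Transformer block
    tf.keras.layers.Dense(VOCAB_SIZE),                # next-token logits
])
model.compile(
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer='adam',
)
```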
# Documentation
An [Official Website](https://ivanbongiorni.github.io/maximal/) is now available with documentation and tutorials. The PyPI page is available [here](https://pypi.org/project/maximal/1.0/).
# Elements
In `layers.py`:
- `SelfAttention`: `keras.Layer` that computes *Scaled Dot-Product Attention* (illustrated by the sketch after this list).
- `MultiHeadSelfAttention`: `keras.Layer` that concatenates multiple `SelfAttention` layers and projects the result back to the original input shape through a linear transformation.
- `PositionalEmbedding`: `keras.Layer`, implements the pair of Embedding layers used in the Transformer literature, one for tokens and one for positions. Positional encoding is learned through a `tf.keras.layers.Embedding()` layer rather than the deterministic (sinusoidal) encoding of the original paper.
- `ImageEmbedding`: `keras.Layer`, implements the analogous pair of Embedding layers used at the input of Vision Transformers, for image patches and positions.
- `TransformerLayer`: `keras.Layer`, a single Transformer Encoder block. It can be used inside any `Sequential()` model in Keras.
- `GPTLayer`: `keras.Layer`, a GPT block: similar to `TransformerLayer` but with a causal attention mechanism. It can be used inside any `Sequential()` model in Keras.
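For reference, the mechanism underlying these layers is standard *Scaled Dot-Product Attention*, `softmax(QK^T / sqrt(d_k)) V` from *Attention Is All You Need*. The sketch below is a plain-TensorFlow illustration of that formula (with the optional causal mask used by GPT-style blocks), not `maximal`'s own implementation:

```
import tensorflow as tf

def scaled_dot_product_attention(q, k, v, causal=False):
    # softmax(Q K^T / sqrt(d_k)) V
    d_k = tf.cast(tf.shape(k)[-1], tf.float32)
    scores = tf.matmul(q, k, transpose_b=True) / tf.sqrt(d_k)
    if causal:
        # Block attention to future positions: this causal mask is what
        # distinguishes a GPT-style block from a bidirectional encoder block.
        n = tf.shape(scores)[-1]
        mask = tf.linalg.band_part(tf.ones((n, n)), -1, 0)  # lower-triangular
        scores += (1.0 - mask) * -1e9
    return tf.matmul(tf.nn.softmax(scores, axis=-1), v)

# Toy check: 2 sequences of 5 tokens with 16 features.
x = tf.random.normal((2, 5, 16))
print(scaled_dot_product_attention(x, x, x, causal=True).shape)  # (2, 5, 16)
```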
Coming soon: `models.py`.
# Requirements
```
h5py
numpy
tensorflow >= 2.0
```

# Author
Ivan Bongiorni. [LinkedIn](https://www.linkedin.com/in/ivan-bongiorni-b8a583164/)

# License
2020 Ivan Bongiorni. This repository is licensed under the MIT license. See [LICENSE]() for further details.