https://github.com/kyegomez/liqudnet
Implementation of Liquid Nets in PyTorch
- Host: GitHub
- URL: https://github.com/kyegomez/liqudnet
- Owner: kyegomez
- License: MIT
- Created: 2023-11-06T02:49:52.000Z (almost 2 years ago)
- Default Branch: master
- Last Pushed: 2025-01-27T17:40:00.000Z (9 months ago)
- Last Synced: 2025-03-30T03:04:01.248Z (7 months ago)
- Topics: artificial-intelligence, attention-is-all-you-need, attention-mechanism, liquidnets, machine-learning, recurrent-neural-network, recurrent-neural-networks
- Language: Python
- Homepage: https://discord.gg/GYbXvDGevY
- Size: 2.18 MB
- Stars: 59
- Watchers: 2
- Forks: 9
- Open Issues: 1
Metadata Files:
- Readme: README.md
- Funding: .github/FUNDING.yml
- License: LICENSE
Awesome Lists containing this project
README
[Join the Discord](https://discord.gg/qUtxnK2NMf)
# LiquidNet
This is a simple PyTorch translation of the official Liquid Time-constant Networks repository. [Find the original repo here](https://github.com/raminmh/liquid_time_constant_networks).

## Install

`pip install liquidnet`

## Usage
```python
import torch
from liquidnet.main import LiquidNet

# Create a LiquidNet with a specified number of units
num_units = 64
ltc_cell = LiquidNet(num_units)

# Generate random input data with batch size 4 and input size 32
batch_size = 4
input_size = 32
inputs = torch.randn(batch_size, input_size)

# Initialize the cell state (hidden state)
initial_state = torch.zeros(batch_size, num_units)

# Forward pass through the LiquidNet
outputs, final_state = ltc_cell(inputs, initial_state)

# Print the shapes of outputs and final_state
print("Outputs shape:", outputs.shape)
print("Final state shape:", final_state.shape)
```
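For intuition about what happens inside the cell, the Liquid Time-constant Networks paper updates each neuron's state with a fused (semi-implicit) Euler step of the ODE `dx/dt = -x/tau + f(x, I) * (A - x)`. Below is a minimal pure-Python sketch of that step for a single neuron; the parameter names (`dt`, `tau`, `A`, `w`, `b`) are illustrative assumptions, not this package's API.

```python
import math

def ltc_fused_step(x, I, dt=0.1, tau=1.0, A=1.0, w=1.0, b=0.0):
    """One fused Euler step of a single liquid time-constant neuron:
        dx/dt = -x/tau + f(x, I) * (A - x)
    solved semi-implicitly as
        x' = (x + dt * f * A) / (1 + dt * (1/tau + f))
    All parameter names here are illustrative, not the package's API.
    """
    f = 1.0 / (1.0 + math.exp(-(w * I + b)))  # sigmoid gate on the input
    return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))

# Roll the state forward over a short input sequence.
x = 0.0
for I in [0.5, 1.0, -0.3, 0.8]:
    x = ltc_fused_step(x, I)
print(x)  # state stays bounded in (0, A) by construction
```

The fused update is what makes the cell stable: the denominator grows with the gate `f`, so the state cannot blow up even for large inputs.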
## `VisionLiquidNet`
- A simple model with two convolutions and two max pools; a lot of room for improvement.

```python
import torch
from liquidnet.vision_liquidnet import VisionLiquidNet

# Random input image
x = torch.randn(4, 3, 32, 32)

# Create a VisionLiquidNet with a specified number of units and classes
model = VisionLiquidNet(64, 10)

# Forward pass through the VisionLiquidNet
print(model(x).shape)
```
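When wiring conv features into a recurrent cell like this, the flattened feature size after the conv/pool stages determines the cell's input dimension. A small sketch of that arithmetic for a 32x32 CIFAR-style input, assuming 3x3 convs with padding 1 and 2x2 max pools (the exact kernel sizes here are a guess, not read from the repo):

```python
def conv2d_out(size, kernel=3, stride=1, padding=1):
    """Spatial output size of a conv layer (standard formula)."""
    return (size + 2 * padding - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    """Spatial output size of a max-pool layer."""
    return (size - kernel) // stride + 1

# Assumed architecture: two (conv 3x3 pad 1 -> max-pool 2x2) stages,
# matching the README's description of two convs with two max pools.
size = 32
for _ in range(2):
    size = pool_out(conv2d_out(size))

channels = 32  # hypothetical channel count of the second conv
print(size, size * size * channels)  # spatial side, flattened feature dim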
# Citation
```bibtex
@article{DBLP:journals/corr/abs-2006-04439,
author = {Ramin M. Hasani and
Mathias Lechner and
Alexander Amini and
Daniela Rus and
Radu Grosu},
title = {Liquid Time-constant Networks},
journal = {CoRR},
volume = {abs/2006.04439},
year = {2020},
url = {https://arxiv.org/abs/2006.04439},
eprinttype = {arXiv},
eprint = {2006.04439},
timestamp = {Fri, 12 Jun 2020 14:02:57 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2006-04439.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
# License
MIT

# Todo:
- [ ] Implement LiquidNet for vision and train on CIFAR