Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/kyegomez/lfm
An open source implementation of LFMs from Liquid AI: Liquid Foundation Models
agents ai genai liquid llms ssm swarms transformers
Last synced: 7 days ago
- Host: GitHub
- URL: https://github.com/kyegomez/lfm
- Owner: kyegomez
- License: mit
- Created: 2024-09-30T15:33:47.000Z (5 months ago)
- Default Branch: main
- Last Pushed: 2025-02-03T12:33:06.000Z (18 days ago)
- Last Synced: 2025-02-08T00:15:11.387Z (14 days ago)
- Topics: agents, ai, genai, liquid, llms, ssm, swarms, transformers
- Language: Python
- Homepage: https://discord.com/servers/agora-999382051935506503
- Size: 2.19 MB
- Stars: 147
- Watchers: 8
- Forks: 25
- Open Issues: 5
- Metadata Files:
- Readme: README.md
- Funding: .github/FUNDING.yml
- License: LICENSE
Awesome Lists containing this project
README
# Liquid Foundation Models [LFMs]
[Discord](https://discord.gg/agora-999382051935506503) [YouTube](https://www.youtube.com/@kyegomez3242) [LinkedIn](https://www.linkedin.com/in/kye-g-38759a207/) [X](https://x.com/kyegomezb)
This is an attempt at an open source implementation of LFMs. This is obviously not the official repository, since the models are closed source. The papers I am using as references are linked below.
[Discover more about the model from the original article](https://www.liquid.ai/liquid-foundation-models)

## Installation
```bash
$ pip3 install -U lfm-torch
```

## Usage
```python
import torch
from lfm_torch.model import LFModel
from loguru import logger

# Instantiate and test the model
if __name__ == "__main__":
    batch_size, seq_length, embedding_dim = 32, 128, 512
    token_dim, channel_dim, expert_dim, adapt_dim, num_experts = (
        embedding_dim,
        embedding_dim,
        embedding_dim,
        128,
        4,
    )
    model = LFModel(
        token_dim, channel_dim, expert_dim, adapt_dim, num_experts
    )

    input_tensor = torch.randn(
        batch_size, seq_length, embedding_dim
    )  # 3D text tensor

    output = model(input_tensor)
    logger.info("Model forward pass complete.")
```

## Liquid Transformer
A novel neural architecture combining Liquid Neural Networks, Transformer attention mechanisms, and Mixture of Experts (MoE) for enhanced adaptive processing and dynamic state updates. Very experimental and early! We're working on a training script [here](./liquid_transformer_train.py). It still needs a real tokenizer, such as Llama's tokenizer, but it's getting there. If you can help with this, let me know.

### Architecture Overview
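For intuition, here is a minimal PyTorch sketch of the two core components of this architecture: a liquid cell (recurrent and input projections summed, then activation, LayerNorm, and dropout) and a dense softmax-gated mixture of experts. The `LiquidCell` and `SimpleMoE` names, dimensions, and choice of `tanh` are illustrative assumptions, not the actual `lfm_torch` API.

```python
import torch
import torch.nn as nn


class LiquidCell(nn.Module):
    """Sketch of a liquid cell: h_new = Dropout(LayerNorm(tanh(W_h h + W_in x)))."""

    def __init__(self, dim: int, dropout: float = 0.1):
        super().__init__()
        self.w_h = nn.Linear(dim, dim)   # recurrent path on the hidden state
        self.w_in = nn.Linear(dim, dim)  # input path
        self.norm = nn.LayerNorm(dim)
        self.drop = nn.Dropout(dropout)

    def forward(self, x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        return self.drop(self.norm(torch.tanh(self.w_h(h) + self.w_in(x))))


class SimpleMoE(nn.Module):
    """Dense MoE: a gating network weights the outputs of all experts."""

    def __init__(self, dim: int, num_experts: int = 4):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Linear(dim, dim) for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.gate(x), dim=-1)  # (..., num_experts)
        # Stack expert outputs along a trailing axis: (..., dim, num_experts)
        outs = torch.stack([expert(x) for expert in self.experts], dim=-1)
        return (outs * weights.unsqueeze(-2)).sum(dim=-1)


if __name__ == "__main__":
    x = torch.randn(2, 16, 64)
    cell = LiquidCell(64)
    h = torch.zeros_like(x)
    print(cell(x, h).shape)       # same shape as the input
    print(SimpleMoE(64)(x).shape)
```

Both blocks preserve the input shape, so they can be dropped between attention and the residual/LayerNorm step as the diagram below suggests.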
```mermaid
flowchart TB
subgraph "Liquid Transformer"
Input["Input Sequence"] --> TL["Transformer Layer"]
subgraph "Transformer Layer"
direction TB
MHA["Multi-Head Attention"] --> LC["Liquid Cell"]
LC --> MOE["Mixture of Experts"]
MOE --> LN["Layer Norm + Residual"]
end
subgraph "Liquid Cell Details"
direction LR
HS["Hidden State"] --> WH["W_h Linear"]
Input2["Input"] --> WI["W_in Linear"]
WH --> Add((+))
WI --> Add
Add --> Act["Activation"]
Act --> LN2["LayerNorm"]
LN2 --> DO["Dropout"]
end
subgraph "MoE Details"
direction TB
Input3["Input"] --> Gate["Gating Network"]
Input3 --> E1["Expert 1"]
Input3 --> E2["Expert 2"]
Input3 --> E3["Expert N"]
Gate --> Comb["Weighted Combination"]
E1 --> Comb
E2 --> Comb
E3 --> Comb
end
TL --> Output["Output Sequence"]
end
```

```python
import torch
from loguru import logger

from lfm_torch.liquid_t_moe import LiquidTransformer

# Example usage
if __name__ == "__main__":
    seq_len, batch_size, embed_size = 10, 2, 64
    num_heads, num_experts, expert_size, num_layers = 8, 4, 64, 6

    # Create the model
    model = LiquidTransformer(
        embed_size, num_heads, num_experts, expert_size, num_layers
    )

    # Example input tensor
    x = torch.randn(seq_len, batch_size, embed_size)

    # Forward pass
    output = model(x)
    logger.info(f"Model output shape: {output.shape}")
```

# Citations
- All credit for the liquid transformer architecture goes to the original authors at Liquid AI
- https://arxiv.org/abs/2209.12951
# License
This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.