https://github.com/tekaratzas/RustGPT
A transformer-based LLM, written completely in Rust.
- Host: GitHub
- URL: https://github.com/tekaratzas/RustGPT
- Owner: tekaratzas
- Created: 2025-09-13T22:05:55.000Z (6 months ago)
- Default Branch: main
- Last Pushed: 2025-09-14T04:41:52.000Z (6 months ago)
- Last Synced: 2025-09-14T05:43:01.288Z (6 months ago)
- Language: Rust
- Size: 60.5 KB
- Stars: 0
- Watchers: 0
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- StarryDivineSky - tekaratzas/RustGPT
- awesome-repositories - tekaratzas/RustGPT - A transformer-based LLM, written completely in Rust (Rust)
README
# Rust LLM from Scratch
[![Rust](https://github.com/tekaratzas/RustGPT/actions/workflows/rust.yml/badge.svg)](https://github.com/tekaratzas/RustGPT/actions/workflows/rust.yml)
https://github.com/user-attachments/assets/ec4a4100-b03a-4b3c-a7d6-806ea54ed4ed
A complete **Large Language Model implementation in pure Rust** with no external ML frameworks. Built from the ground up using only `ndarray` for matrix operations.
## What This Is
This project demonstrates how to build a transformer-based language model from scratch in Rust, including:
- **Pre-training** on factual text completion
- **Instruction tuning** for conversational AI
- **Interactive chat mode** for testing
- **Full backpropagation** with gradient clipping
- **Modular architecture** with clean separation of concerns
## What This Isn't
This is not a production-grade LLM; it is orders of magnitude smaller and simpler than modern large models.
It is a toy project that demonstrates how these models work under the hood.
## Key Files to Explore
Start with these two core files to understand the implementation:
- **[`src/main.rs`](src/main.rs)** - Training pipeline, data preparation, and interactive mode
- **[`src/llm.rs`](src/llm.rs)** - Core LLM implementation with forward/backward passes and training logic
## Architecture
The model uses a **transformer-based architecture** with the following components:
```
Input Text → Tokenization → Embeddings → Transformer Blocks → Output Projection → Predictions
```
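The first two stages of this pipeline can be sketched in plain Rust. The function names, the toy vocabulary, and the 2-dimensional embedding table below are illustrative assumptions, not the crate's actual API:

```rust
/// Tokenization: map whitespace-separated words to vocabulary indices,
/// skipping words that are not in the vocabulary.
pub fn tokenize(text: &str, vocab: &[&str]) -> Vec<usize> {
    text.split_whitespace()
        .filter_map(|w| vocab.iter().position(|&v| v == w))
        .collect()
}

/// Embeddings: look up one row of the embedding table per token id
/// (a toy 2-dimensional table; the real model uses EMBEDDING_DIM columns).
pub fn embed(tokens: &[usize], table: &[[f32; 2]]) -> Vec<[f32; 2]> {
    tokens.iter().map(|&t| table[t]).collect()
}
```

The transformer blocks and output projection then turn the embedded sequence into one logit per vocabulary entry, from which the next token is decoded.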
### Project Structure
```
src/
├── main.rs               # Training pipeline and interactive mode
├── llm.rs                # Core LLM implementation and training logic
├── lib.rs                # Library exports and constants
├── transformer.rs        # Transformer block (attention + feed-forward)
├── self_attention.rs     # Multi-head self-attention mechanism
├── feed_forward.rs       # Position-wise feed-forward networks
├── embeddings.rs         # Token embedding layer
├── output_projection.rs  # Final linear layer for vocabulary predictions
├── vocab.rs              # Vocabulary management and tokenization
├── layer_norm.rs         # Layer normalization
└── adam.rs               # Adam optimizer implementation
tests/
├── llm_test.rs               # Tests for core LLM functionality
├── transformer_test.rs       # Tests for transformer blocks
├── self_attention_test.rs    # Tests for attention mechanisms
├── feed_forward_test.rs      # Tests for feed-forward layers
├── embeddings_test.rs        # Tests for embedding layers
├── vocab_test.rs             # Tests for vocabulary handling
├── adam_test.rs              # Tests for optimizer
└── output_projection_test.rs # Tests for output layer
```
## What The Model Learns
The implementation includes two training phases:
1. **Pre-training**: Learns basic world knowledge from factual statements
- "The sun rises in the east and sets in the west"
- "Water flows downhill due to gravity"
- "Mountains are tall and rocky formations"
2. **Instruction Tuning**: Learns conversational patterns
- "User: How do mountains form? Assistant: Mountains are formed through tectonic forces..."
- Handles greetings, explanations, and follow-up questions
## Quick Start
```bash
# Clone and run
git clone https://github.com/tekaratzas/RustGPT.git
cd RustGPT
cargo run
# The model will:
# 1. Build vocabulary from training data
# 2. Pre-train on factual statements (100 epochs)
# 3. Instruction-tune on conversational data (100 epochs)
# 4. Enter interactive mode for testing
```
## Interactive Mode
After training, test the model interactively:
```
Enter prompt: How do mountains form?
Model output: Mountains are formed through tectonic forces or volcanism over long geological time periods
Enter prompt: What causes rain?
Model output: Rain is caused by water vapor in clouds condensing into droplets that become too heavy to remain airborne
```
## Technical Implementation
### Model Configuration
- **Vocabulary Size**: Dynamic (built from training data)
- **Embedding Dimension**: 128 (defined by `EMBEDDING_DIM` in `src/lib.rs`)
- **Hidden Dimension**: 256 (defined by `HIDDEN_DIM` in `src/lib.rs`)
- **Max Sequence Length**: 80 tokens (defined by `MAX_SEQ_LEN` in `src/lib.rs`)
- **Architecture**: 3 Transformer blocks + embeddings + output projection
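The constants below mirror the values quoted in this section; treat them as a sketch of what `src/lib.rs` defines, not a verbatim copy of it:

```rust
/// Width of each token embedding vector.
pub const EMBEDDING_DIM: usize = 128;
/// Width of the feed-forward hidden layer.
pub const HIDDEN_DIM: usize = 256;
/// Maximum number of tokens per input sequence.
pub const MAX_SEQ_LEN: usize = 80;
```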
### Training Details
- **Optimizer**: Adam with gradient clipping
- **Pre-training LR**: 0.0005 (100 epochs)
- **Instruction Tuning LR**: 0.0001 (100 epochs)
- **Loss Function**: Cross-entropy loss
- **Gradient Clipping**: L2 norm capped at 5.0
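The clipping step described above can be sketched as scaling a gradient so its L2 norm never exceeds the cap. This is a from-scratch illustration over a flat slice; the real implementation applies the same idea to the model's parameter matrices:

```rust
/// Scale `grad` in place so that its L2 norm does not exceed `max_norm`
/// (the README's cap is 5.0). Gradients already within the cap are untouched.
pub fn clip_l2(grad: &mut [f32], max_norm: f32) {
    let norm = grad.iter().map(|g| g * g).sum::<f32>().sqrt();
    if norm > max_norm {
        let scale = max_norm / norm;
        for g in grad.iter_mut() {
            *g *= scale;
        }
    }
}
```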
### Key Features
- **Custom tokenization** with punctuation handling
- **Greedy decoding** for text generation
- **Gradient clipping** for training stability
- **Modular layer system** with clean interfaces
- **Comprehensive test coverage** for all components
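Greedy decoding reduces to an argmax over the logits at each step. A minimal sketch, assuming logits arrive as a flat `f32` slice (the crate may structure this differently):

```rust
/// Greedy decoding step: return the index of the largest logit,
/// i.e. the single most probable next token.
pub fn argmax(logits: &[f32]) -> usize {
    logits
        .iter()
        .enumerate()
        .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
        .map(|(i, _)| i)
        .unwrap()
}
```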
## Development
```bash
# Run all tests
cargo test
# Test specific components
cargo test --test llm_test
cargo test --test transformer_test
cargo test --test self_attention_test
# Build optimized version
cargo build --release
# Run with verbose output
cargo test -- --nocapture
```
## Learning Resources
This implementation demonstrates key ML concepts:
- **Transformer architecture** (attention, feed-forward, layer norm)
- **Backpropagation** through neural networks
- **Language model training** (pre-training + fine-tuning)
- **Tokenization** and vocabulary management
- **Gradient-based optimization** with Adam
Perfect for understanding how modern LLMs work under the hood!
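For instance, the punctuation-aware tokenization listed above can be demonstrated in a few lines. This is an assumption about how the crate's tokenizer behaves (words split on whitespace, punctuation kept as separate tokens), not its actual code:

```rust
/// Split text into word tokens and standalone punctuation tokens,
/// e.g. "Hello, world!" -> ["Hello", ",", "world", "!"].
pub fn tokenize_with_punct(text: &str) -> Vec<String> {
    let mut tokens = Vec::new();
    let mut word = String::new();
    for c in text.chars() {
        if c.is_alphanumeric() || c == '\'' {
            word.push(c);
        } else {
            if !word.is_empty() {
                tokens.push(std::mem::take(&mut word));
            }
            if !c.is_whitespace() {
                tokens.push(c.to_string());
            }
        }
    }
    if !word.is_empty() {
        tokens.push(word);
    }
    tokens
}
```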
## Dependencies
- `ndarray` - N-dimensional arrays for matrix operations
- `rand` + `rand_distr` - Random number generation for initialization
No PyTorch, TensorFlow, or Candle - just pure Rust and linear algebra!
## Contributing
Contributions are welcome! This project is perfect for learning and experimentation.
### High Priority Features Needed
- **Model Persistence** - Save/load trained parameters to disk (currently all in-memory)
- **Performance optimizations** - SIMD, parallel training, memory efficiency
- **Better sampling** - Beam search, top-k/top-p, temperature scaling
- **Evaluation metrics** - Perplexity, benchmarks, training visualizations
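As one possible starting point for the persistence feature, parameters could be dumped as raw little-endian `f32` bytes using only the standard library. The function names and flat-slice layout are hypothetical; the real model would need to serialize every layer's matrices:

```rust
use std::fs;
use std::io::{self, Read, Write};

/// Write a flat f32 parameter slice to disk as little-endian bytes.
pub fn save_params(path: &str, params: &[f32]) -> io::Result<()> {
    let mut file = fs::File::create(path)?;
    for p in params {
        file.write_all(&p.to_le_bytes())?;
    }
    Ok(())
}

/// Read parameters back, reversing `save_params`.
pub fn load_params(path: &str) -> io::Result<Vec<f32>> {
    let mut bytes = Vec::new();
    fs::File::open(path)?.read_to_end(&mut bytes)?;
    Ok(bytes
        .chunks_exact(4)
        .map(|c| f32::from_le_bytes([c[0], c[1], c[2], c[3]]))
        .collect())
}
```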
### Areas for Improvement
- **Advanced architectures** (multi-head attention, positional encoding, RoPE)
- **Training improvements** (different optimizers, learning rate schedules, regularization)
- **Data handling** (larger datasets, tokenizer improvements, streaming)
- **Model analysis** (attention visualization, gradient analysis, interpretability)
### Getting Started
1. Fork the repository
2. Create a feature branch: `git checkout -b feature/model-persistence`
3. Make your changes and add tests
4. Run the test suite: `cargo test`
5. Submit a pull request with a clear description
### Code Style
- Follow standard Rust conventions (`cargo fmt`)
- Add comprehensive tests for new features
- Update documentation and README as needed
- Keep the "from scratch" philosophy - avoid heavy ML dependencies
### Ideas for Contributions
- **Beginner**: Model save/load, more training data, config files
- **Intermediate**: Beam search, positional encodings, training checkpoints
- **Advanced**: Multi-head attention, layer parallelization, custom optimizations
Questions? Open an issue or start a discussion!