https://github.com/edujbarrios/moe_testing_framework_on_llms_and_vllms
A simple, scalable framework for testing Mixture of Experts (MoE) on text and vision tasks.
- Host: GitHub
- URL: https://github.com/edujbarrios/moe_testing_framework_on_llms_and_vllms
- Owner: edujbarrios
- License: MIT
- Created: 2025-01-22T14:13:04.000Z (9 months ago)
- Default Branch: main
- Last Pushed: 2025-01-22T14:17:13.000Z (9 months ago)
- Last Synced: 2025-01-22T15:25:31.028Z (9 months ago)
- Language: Python
- Size: 163 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# Testing MoE for LLMs and VLLMs


A comprehensive testing framework for exploring and evaluating Mixture of Experts (MoE) architectures in Large Language Models (LLMs) and Vision-Language Models (VLLMs). This project provides tools for analyzing expert behavior, routing mechanisms, and performance metrics.
## Key Features

- **Text Processing MoE**
  - Token-level expert routing (see the gating sketch below)
  - Complexity analysis
  - Real-time performance metrics
  - Detailed token analysis
- **Image Processing MoE**
  - Region-based expert assignment
  - Feature extraction
  - Pattern recognition
  - Visual complexity metrics
- **Advanced MoE Variants**
  - Switched MoE implementation
  - Customizable routing strategies
  - Extensible base architecture
  - Performance benchmarking
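
To make the routing idea concrete, here is a minimal, hypothetical sketch of token-level expert routing with a softmax gate, using only NumPy (one of the listed dependencies). The function name, shapes, and top-k choice are illustrative and are not the framework's actual API:

```python
import numpy as np

# Illustrative only: a minimal top-k softmax gate over per-token features.
# The real framework's routing API may differ; names here are hypothetical.
def route_tokens(token_features, gate_weights, top_k=1):
    """Return, for each token, the indices of the top_k experts and their gate scores."""
    logits = token_features @ gate_weights             # (num_tokens, num_experts)
    exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs = exp / exp.sum(axis=-1, keepdims=True)      # softmax over experts
    top_experts = np.argsort(-probs, axis=-1)[:, :top_k]
    top_scores = np.take_along_axis(probs, top_experts, axis=-1)
    return top_experts, top_scores

# Example: 5 tokens with 8-dim features routed across 3 experts
tokens = np.random.rand(5, 8)
gate = np.random.rand(8, 3)
experts, scores = route_tokens(tokens, gate, top_k=1)
print(experts.ravel(), scores.ravel())
```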

## Interactive Dashboard

Monitor and analyze MoE behavior in real-time:
- Complexity metrics visualization
- Expert utilization tracking (see the plotting sketch after this list)
- Performance metrics
- Pattern analysis
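
Expert utilization tracking can be visualized with Matplotlib, which the project lists as a dependency. The sketch below plots made-up routing counts; in practice the numbers would come from the framework's own metrics:

```python
import matplotlib.pyplot as plt

# Hypothetical example: visualize how often each expert was selected.
# These counts are invented for illustration, not produced by the framework.
expert_counts = {"expert_0": 42, "expert_1": 17, "expert_2": 29}

plt.bar(list(expert_counts), list(expert_counts.values()))
plt.xlabel("Expert")
plt.ylabel("Tokens routed")
plt.title("Expert utilization")
plt.show()
```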

## Quick Start

1. **Installation**
```bash
# Clone the repository
git clone https://github.com/edujbarrios/MoE_testing_framework_on_LLMs_and_VLLMs.git
cd MoE_testing_framework_on_LLMs_and_VLLMs

# Install dependencies
pip install -e .
```

2. **Basic Usage**
```python
from text_processing.text_moe import TextMoE
from image_processing.image_moe import ImageMoE

# Text Processing Example
text_moe = TextMoE(num_experts=3)
result = text_moe.process("Example text for processing")
print(result)

# Image Processing Example
image_moe = ImageMoE(num_experts=4)
result = image_moe.process(image_data)  # image_data: an image array prepared beforehand (loading not shown)
print(result)
```

## Project Structure

```
.
├── benchmarking/       # Performance benchmarking tools
├── image_processing/   # Image MoE implementation
├── moe_variants/       # Different MoE architectures
├── notebooks/          # Jupyter notebooks for analysis
├── text_processing/    # Text MoE implementation
└── utils/              # Utility functions and tools
```

## Components

### Text Processing MoE

- Token-level complexity analysis
- Expert specialization (a sketch follows this list):
  - Short text specialist
  - Medium text specialist
  - Long text specialist
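
As a rough illustration of length-based specialization, the hypothetical helper below dispatches a text to a short, medium, or long expert by token count. Thresholds and expert names are assumptions, not values taken from the codebase:

```python
# Illustrative only: route text to a length-specialized expert,
# mirroring the short/medium/long specialization described above.
def pick_expert(text: str, short_max: int = 20, medium_max: int = 100) -> str:
    num_tokens = len(text.split())  # crude whitespace tokenization for the sketch
    if num_tokens <= short_max:
        return "short_text_expert"
    if num_tokens <= medium_max:
        return "medium_text_expert"
    return "long_text_expert"

print(pick_expert("A short query"))                    # short_text_expert
print(pick_expert("A much longer document... " * 50))  # long_text_expert
```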

### Image Processing MoE

- Region-based analysis (a sketch follows this list)
- Expert types:
  - Dark uniform regions
  - Bright uniform regions
  - Edge detection
  - Texture analysis
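
A hypothetical sketch of region-based expert assignment: classify each region by simple brightness and variance statistics that loosely match the expert types above. Thresholds and expert names are illustrative only:

```python
import numpy as np

# Illustrative only: assign an image region to an expert using basic statistics.
def classify_region(region: np.ndarray) -> str:
    mean, std = region.mean(), region.std()
    if std < 0.05:  # low variance -> uniform region
        return "dark_uniform_expert" if mean < 0.5 else "bright_uniform_expert"
    # High local variance suggests edges or texture; a gradient check separates them.
    grad = np.abs(np.diff(region, axis=0)).mean() + np.abs(np.diff(region, axis=1)).mean()
    return "edge_expert" if grad > 0.2 else "texture_expert"

region = np.random.rand(16, 16)  # pixel values in [0, 1]
print(classify_region(region))
```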

### Switched MoE

- Dynamic routing based on input complexity (a sketch follows this list)
- Confidence scoring
- Performance metrics
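
For intuition, a switch-style (top-1) router can be sketched as follows: a gate scores one input against every expert, the best expert is selected, and its gate probability doubles as a confidence score. This NumPy sketch shows the general technique, not the project's actual implementation:

```python
import numpy as np

# Illustrative only: a switch-style (top-1) router with a confidence score.
def switch_route(features: np.ndarray, gate_weights: np.ndarray):
    logits = features @ gate_weights                     # one score per expert
    shifted = np.exp(logits - logits.max())
    probs = shifted / shifted.sum()                      # softmax over experts
    expert = int(np.argmax(probs))                       # top-1 ("switched") choice
    confidence = float(probs[expert])
    return expert, confidence

features = np.random.rand(8)   # one input's feature vector
gate = np.random.rand(8, 4)    # gate for 4 experts
expert, confidence = switch_route(features, gate)
print(f"routed to expert {expert} with confidence {confidence:.2f}")
```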

## Benchmarking

The project includes comprehensive benchmarking tools:
```python
from benchmarking import run_benchmark, generate_report

# Run benchmarks
results = run_benchmark(moe_variant, input_type='text')
generate_report(results, variant_name)
```

## Jupyter Notebooks

Three detailed notebooks are provided:
1. `01_basic_moe.ipynb` - Basic concepts and implementation
2. `02_text_processing.ipynb` - Text analysis and metrics
3. `03_image_processing.ipynb` - Image processing and visualization

## Development

### Requirements
- Python 3.11+
- Rich (for console interface)
- NumPy (for numerical operations)
- Matplotlib (for visualization)
- Jupyter (for notebooks)

### Running Tests

```bash
python -m pytest tests/
```

## Documentation

Detailed documentation is available in the [docs](./docs) directory, including:
- Architecture overview
- API reference
- Implementation details
- Performance optimization tips

## Contributing

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

## Citation

If you use this project in your research, please cite:
```bibtex
@software{moe_testing_framework,
  title = {Testing MoE for LLMs and VLLMs},
  year = {2025},
  author = {Eduardo José Barrios García},
  url = {https://github.com/edujbarrios/MoE_testing_framework_on_LLMs_and_VLLMs}
}
```

## License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## References
- [Sparse MoE Paper](https://arxiv.org/abs/2201.05596)
- [ViT Architecture](https://arxiv.org/abs/2010.11929)
- [LLM with MoE](https://arxiv.org/abs/2006.16668)

## Authors

- Eduardo José Barrios García - *Initial work* - [GitHub](https://github.com/EduardoBarrios)
## Acknowledgments
- Thanks to all contributors
- Inspired by recent advances in LLMs and VLLMs
- Special thanks to the research community