# Transformer Lab

**The Operating System for AI Research Labs**

Designed for ML researchers. Local, on-prem, or in the cloud. Open source.




⬇️ Install for Individuals · 🏢 Install for Teams · 📖 Documentation · 🎬 Demo · 💬 Discord




Mozilla Builders




---

## ✨ What is Transformer Lab?

Transformer Lab is an open-source machine learning platform that unifies the fragmented AI tooling landscape into a single, elegant interface. It is available in two editions:

### 👤 For Individuals
**Perfect for researchers and hobbyists working on a single machine.**

- **Local Privacy:** No data leaves your machine.
- **Full Toolkit:** Train, fine-tune, chat, and evaluate models.
- **Cross-Platform:** Runs natively on macOS (Apple Silicon), Linux, and Windows (WSL2).
- **No Cloud Costs:** Use your own hardware.

### 🏢 For Teams
**Built for research labs scaling across GPU clusters.**

- **Unified Orchestration:** Submit jobs to **Slurm** clusters or **SkyPilot** clouds (AWS, GCP, Azure) from one UI.
- **Collaborative:** Centralized experiment tracking, model registry, and artifact management.
- **Interactive Compute:** One-click Jupyter, VSCode, and SSH sessions on remote nodes.
- **Resilience:** Auto-recovery from checkpoints and spot instance preemption.

---

## 🛠️ Key Capabilities

### 🧠 Foundation Models & LLMs

- **Universal Support:** Download and run Llama 3, DeepSeek, Mistral, Qwen, Phi, and more.
- **Inference Engines:** Support for MLX, vLLM, Ollama, and HuggingFace Transformers.
- **Format Conversion:** Seamlessly convert between HuggingFace, GGUF, and MLX formats.
- **Chat Interface:** Multi-turn chat, batched querying, and function calling support.

### 🎓 Training & Fine-tuning

- **Unified Interface:** Train on local hardware or submit tasks to remote clusters using the same UI.
- **Methods:** Full fine-tuning, LoRA/QLoRA, RLHF (DPO, ORPO, SimPO), and Reward Modeling.
- **Hardware Agnostic:** Optimized trainers for Apple Silicon (MLX), NVIDIA (CUDA), and AMD (ROCm).
- **Hyperparameter Sweeps:** Define parameter ranges in YAML and automatically schedule grid searches.
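The sweeps bullet above can be pictured with a config like the following. The field names here are illustrative assumptions, not Transformer Lab's actual schema — see the documentation for the real format:

```yaml
# Hypothetical sweep config: grid-search LoRA rank and learning rate.
task: finetune
model: meta-llama/Llama-3.2-1B
method: lora
sweep:
  learning_rate: [1e-5, 5e-5, 1e-4]  # each value becomes one run
  lora_rank: [8, 16, 32]
# 3 x 3 = 9 runs scheduled automatically
```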

### 🎨 Diffusion & Image Generation

- **Generation:** Text-to-Image, Image-to-Image, and Inpainting using Stable Diffusion and Flux.
- **Advanced Control:** Full support for ControlNets and IP-Adapters.
- **Training:** Train custom LoRA adapters on your own image datasets.
- **Dataset Management:** Auto-caption images using WD14 taggers.

### 📊 Evaluation & Analytics

- **LLM-as-a-Judge:** Use local or remote models to score outputs on bias, toxicity, and faithfulness.
- **Benchmarks:** Built-in support for EleutherAI LM Evaluation Harness (MMLU, HellaSwag, GSM8K, etc.).
- **Red Teaming:** Automated vulnerability testing for PII leakage, prompt injection, and safety.
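As a generic illustration of the LLM-as-a-judge pattern (not Transformer Lab's internal implementation), the core loop is: build a grading prompt, call a judge model, and parse a numeric score. All names below are hypothetical:

```python
import re

def build_judge_prompt(question, answer, criterion):
    # Ask the judge model to grade the answer on a 1-5 scale.
    return (
        f"Rate the following answer for {criterion} on a scale of 1-5.\n"
        f"Question: {question}\nAnswer: {answer}\n"
        "Reply in the form 'Score: N'."
    )

def parse_score(reply):
    # Extract the integer score from the judge's reply; None if absent.
    match = re.search(r"Score:\s*(\d)", reply)
    return int(match.group(1)) if match else None

def judge(question, answer, criterion, model):
    # `model` is any callable mapping a prompt string to a completion string,
    # e.g. a wrapper around a local or remote LLM endpoint.
    return parse_score(model(build_judge_prompt(question, answer, criterion)))
```

In practice the judge would be one of the local or remote models mentioned above; here any prompt-to-string callable works.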

### 🔌 Plugins & Extensibility

- **Plugin System:** Extend functionality with a robust Python plugin architecture.
- **Lab SDK:** Integrate your existing Python training scripts (`import lab`) to get automatic logging, progress bars, and artifact tracking.
- **CLI:** Power-user command line tool for submitting tasks and monitoring jobs without a browser.
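Transformer Lab's actual plugin interface is defined in its documentation; as a generic sketch of the registry pattern such Python plugin architectures typically use (every name below is hypothetical, not the real API):

```python
# Hypothetical registry-based plugin pattern; illustrative only.
PLUGINS = {}

def register(name):
    # Decorator that records a plugin class under a given name.
    def wrap(cls):
        PLUGINS[name] = cls
        return cls
    return wrap

@register("word_count_eval")
class WordCountEval:
    # A toy evaluator plugin: scores an output by its word count.
    def run(self, text):
        return len(text.split())

def run_plugin(name, text):
    # Look up a registered plugin by name and execute it.
    return PLUGINS[name]().run(text)
```

The registry lets the host application discover and invoke plugins by name without importing them explicitly at call sites.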

### 🗣️ Audio Generation

- **Text-to-Speech:** Generate speech using Kokoro, Bark, and other state-of-the-art models.
- **Training:** Fine-tune TTS models on custom voice datasets.

---

## 📥 Quick Start

### 1. Install

```bash
curl https://lab.cloud/install.sh | bash
```

### 2. Run

```bash
cd ~/.transformerlab/src
./run.sh
```

### 3. Access

Open your browser to `http://localhost:8338`.

#### Requirements
| Platform | Requirements |
|----------|-------------|
| **macOS** | Apple Silicon (M1/M2/M3/M4) |
| **Linux** | NVIDIA or AMD GPU |
| **Windows** | NVIDIA GPU via WSL2 ([setup guide](https://lab.cloud/docs/install/windows-wsl-cuda)) |

---

## 🏢 Enterprise & Cluster Setup

Transformer Lab for Teams runs as an overlay on your existing infrastructure. It does not replace your scheduler; it acts as a modern control plane for it.

To configure Transformer Lab to talk to **Slurm** or **SkyPilot**:
1. Follow the [Teams Install Guide](https://lab.cloud/for-teams/install).
2. Configure your compute providers in the Team Settings.
3. Use the CLI (`lab`) or Web UI to queue tasks across your cluster.

---

## 👩‍💻 Development

### Frontend

```bash
# Requires Node.js v22
npm install
npm start
```

### Backend (API)

```bash
cd api
./install.sh # Sets up Conda env + Python deps
./run.sh # Start the API server
```

### Lab SDK

```bash
pip install transformerlab
```

---

## 🤝 Contributing

We are an open-source initiative backed by builders who care about the future of AI research. We welcome contributions! Please check our [issues](https://github.com/transformerlab/transformerlab-app/issues) for open tasks.



---

## 📄 License

AGPL-3.0 · See [LICENSE](LICENSE) for details.

---

## 📚 Citation

```bibtex
@software{transformerlab,
  author = {Asaria, Ali and Salomone, Tony},
  title  = {Transformer Lab: The Operating System for AI Research},
  year   = {2023},
  url    = {https://github.com/transformerlab/transformerlab-app}
}
```

---

## 💬 Community


Discord · Twitter · GitHub Issues


Built with ❤️ by Transformer Lab in Canada 🇨🇦