https://github.com/filipstrand/mflux
MLX native implementations of state-of-the-art generative image models
- Host: GitHub
- URL: https://github.com/filipstrand/mflux
- Owner: filipstrand
- License: mit
- Created: 2024-08-10T10:30:00.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2026-03-23T13:08:20.000Z (about 1 month ago)
- Last Synced: 2026-03-24T09:24:53.625Z (about 1 month ago)
- Topics: ai, apple-silicon, diffusers, fibo, flux, huggingface, ml, mlx, qwen, qwen-image, seedvr2, transformers, z-image
- Language: Python
- Homepage:
- Size: 153 MB
- Stars: 1,921
- Watchers: 20
- Forks: 131
- Open Issues: 72
Metadata Files:
- Readme: README.md
- Changelog: CHANGELOG.md
- License: LICENSE
- Agents: AGENTS.md
Awesome Lists containing this project
- awesome-mlx - mflux - MLX native implementations of state-of-the-art generative image models (Vision & Multimodal)
- awesome-rainmana - filipstrand/mflux - MLX native implementations of state-of-the-art generative image models (Python)
README

[PyPI: mflux](https://pypi.org/project/mflux/)
[PyPI: mlx](https://pypi.org/project/mlx/)
[CI tests](https://github.com/filipstrand/mflux/actions/workflows/tests.yml)
### About
Run the latest state-of-the-art generative image models locally on your Mac in native MLX!
### Table of contents
- [💡 Philosophy](#-philosophy)
- [💿 Installation](#-installation)
- [🎨 Models](#-models)
- [✨ Features](#-features)
- [🌱 Related projects](#related-projects)
- [🙏 Acknowledgements](#-acknowledgements)
- [⚖️ License](#%EF%B8%8F-license)
---
### 💡 Philosophy
MFLUX is a line-by-line MLX port of several state-of-the-art generative image models from the [Huggingface Diffusers](https://github.com/huggingface/diffusers) and [Huggingface Transformers](https://github.com/huggingface/transformers) libraries. All models are implemented from scratch in MLX, using only tokenizers from the [Huggingface Transformers](https://github.com/huggingface/transformers) library. MFLUX is purposefully kept minimal and explicit, [@karpathy](https://gist.github.com/awni/a67d16d50f0f492d94a10418e0592bde?permalink_comment_id=5153531#gistcomment-5153531) style.
---
### 💿 Installation
If you haven't already, [install `uv`](https://github.com/astral-sh/uv?tab=readme-ov-file#installation), then run:
```sh
uv tool install --upgrade mflux
```
After installation, the following command shows all available MFLUX CLI commands:
```sh
uv tool list
```
To generate your first image with, for example, the Z-Image Turbo model, run:
```sh
mflux-generate-z-image-turbo \
  --prompt "A puffin standing on a cliff" \
  --width 1280 \
  --height 500 \
  --seed 42 \
  --steps 9 \
  -q 8
```

The first time you run this, the model weights are downloaded automatically, which can take some time. See the [model section](#-models) for the available options and features, and the [common README](src/mflux/models/common/README.md) for shared CLI patterns and examples.
#### Python API
Create a standalone `generate.py` script with inline `uv` dependencies:
```python
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "mflux",
# ]
# ///
from mflux.models.z_image import ZImageTurbo

model = ZImageTurbo(quantize=8)
image = model.generate_image(
    prompt="A puffin standing on a cliff",
    seed=42,
    num_inference_steps=9,
    width=1280,
    height=500,
)
image.save("puffin.png")
```
Run it with:
```sh
uv run generate.py
```
For more Python API inspiration, look at the [CLI entry points](src/mflux/models/z_image/cli/z_image_turbo_generate.py) for the respective models.
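Building on the script above, the same API can be reused to sweep several seeds in one process, so the model is loaded only once. This is a minimal sketch: `ZImageTurbo` and `generate_image()` are taken verbatim from the example above, while `output_name()` and `sweep()` are hypothetical helpers added here for illustration.

```python
from pathlib import Path


def output_name(prompt: str, seed: int) -> str:
    """Build a filesystem-friendly filename from the prompt and seed."""
    slug = "-".join(prompt.lower().split())[:40]
    return f"{slug}-seed{seed}.png"


def sweep(prompt: str, seeds: list[int], out_dir: str = ".") -> list[str]:
    """Generate one image per seed, reusing a single loaded model."""
    from mflux.models.z_image import ZImageTurbo  # requires `mflux` installed

    model = ZImageTurbo(quantize=8)  # load weights once, reuse across seeds
    paths = []
    for seed in seeds:
        image = model.generate_image(
            prompt=prompt,
            seed=seed,
            num_inference_steps=9,
            width=1280,
            height=500,
        )
        path = str(Path(out_dir) / output_name(prompt, seed))
        image.save(path)
        paths.append(path)
    return paths


# Example (downloads the model weights on first use):
#   sweep("A puffin standing on a cliff", seeds=[42, 43, 44])
```

Loading the model once and varying only the seed is noticeably cheaper than invoking the CLI repeatedly, since weight loading dominates startup time.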
#### ⚠️ Troubleshooting: `hf_transfer` error
If you encounter a `ValueError: Fast download using 'hf_transfer' is enabled (HF_HUB_ENABLE_HF_TRANSFER=1) but 'hf_transfer' package is not available`, you can install MFLUX with the `hf_transfer` package included:
```sh
uv tool install --upgrade mflux --with hf_transfer
```
This will enable faster model downloads from Hugging Face.
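If you would rather not add the extra dependency, an alternative workaround (assuming the downloads go through `huggingface_hub`, which reads this environment variable) is to switch the fast-transfer path off before anything triggers a download:

```python
import os

# HF_HUB_ENABLE_HF_TRANSFER is huggingface_hub's toggle for the Rust-based
# fast downloader. Setting it to "0" falls back to regular downloads; it must
# be set before huggingface_hub is imported, since the value is read at
# import time.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "0"
```

Setting the same variable in your shell (`export HF_HUB_ENABLE_HF_TRANSFER=0`) has the same effect; downloads will simply be slower than with `hf_transfer` installed.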
#### DGX / NVIDIA (uv tool install)
```sh
uv tool install --python 3.13 mflux
```
---
### 🎨 Models
MFLUX supports the following model families. Each has different strengths and weaknesses; see each model's README for full usage details.
| Model | Release date | Size | Type | Training | Description |
| --- | --- | --- | --- | --- | --- |
| [Z-Image](src/mflux/models/z_image/README.md) | Nov 2025 | 6B | Distilled & Base | Yes | Fast, small, very good quality and realism. |
| [FLUX.2](src/mflux/models/flux2/README.md) | Jan 2026 | 4B & 9B | Distilled & Base | Yes | Fastest and smallest, with very good quality and edit capabilities. |
| [FIBO](src/mflux/models/fibo/README.md) | Oct 2025+ | 8B | Distilled & Base | No | Very good JSON-based prompt understanding. Has edit capabilities. |
| [SeedVR2](src/mflux/models/seedvr2/README.md) | Jun 2025 | 3B & 7B | – | No | Best upscaling model. |
| [Qwen Image](src/mflux/models/qwen/README.md) | Aug 2025+ | 20B | Base | No | Large model (slower); strong prompt understanding and world knowledge. Has edit capabilities. |
| [Depth Pro](src/mflux/models/depth_pro/README.md) | Oct 2024 | – | – | No | Very fast and accurate depth estimation model from Apple. |
| [FLUX.1](src/mflux/models/flux/README.md) | Aug 2024 | 12B | Distilled & Base | No (legacy) | Legacy option with decent quality. Has edit capabilities with the 'Kontext' model and upscaling support via ControlNet. |
---
### ✨ Features
**General**
- Quantization and local model loading
- LoRA support (multi-LoRA, scales, library lookup)
- Metadata export + reuse, plus prompt file support
**Model-specific highlights**
- Text-to-image and image-to-image generation
- LoRA finetuning
- In-context editing, multi-image editing, and virtual try-on
- ControlNet (Canny), depth conditioning, fill/inpainting, and Redux
- Upscaling (SeedVR2 and Flux ControlNet)
- Depth map extraction and FIBO prompt tooling (VLM inspire/refine)
See the [common README](src/mflux/models/common/README.md) for detailed usage and examples, and use the model section above to browse specific models and capabilities.
> [!NOTE]
> As MFLUX supports a wide variety of CLI tools and options, the easiest way to navigate the CLI in 2026 is to use a coding agent (like [Cursor](https://cursor.com), [Claude Code](https://www.anthropic.com/claude-code), or similar). Ask questions like: "Can you help me generate an image using z-image?"
---
### 🌱 Related projects
- [MindCraft Studio](https://themindstudio.cc/mindcraft#models) by [@shaoju](https://github.com/shaoju)
- [Mflux-ComfyUI](https://github.com/raysers/Mflux-ComfyUI) by [@raysers](https://github.com/raysers)
- [MFLUX-WEBUI](https://github.com/CharafChnioune/MFLUX-WEBUI) by [@CharafChnioune](https://github.com/CharafChnioune)
- [mflux-fasthtml](https://github.com/anthonywu/mflux-fasthtml) by [@anthonywu](https://github.com/anthonywu)
- [mflux-streamlit](https://github.com/elitexp/mflux-streamlit) by [@elitexp](https://github.com/elitexp)
---
### 🙏 Acknowledgements
MFLUX would not be possible without the great work of:
- The MLX Team for [MLX](https://github.com/ml-explore/mlx) and [MLX examples](https://github.com/ml-explore/mlx-examples)
- Black Forest Labs for the [FLUX project](https://github.com/black-forest-labs/flux)
- Bria for the [FIBO project](https://huggingface.co/briaai/FIBO)
- Tongyi Lab for the [Z-Image project](https://tongyi-mai.github.io/Z-Image-blog/)
- Qwen Team for the [Qwen Image project](https://qwen.ai/blog?id=a6f483777144685d33cd3d2af95136fcbeb57652&from=research.research-list)
- ByteDance, @numz and @adrientoupet for the [SeedVR2 project](https://github.com/numz/ComfyUI-SeedVR2_VideoUpscaler)
- Hugging Face for the [Diffusers library implementations](https://github.com/huggingface/diffusers)
- Depth Pro authors for the [Depth Pro model](https://github.com/apple/ml-depth-pro?tab=readme-ov-file#citation)
- The MLX community and all [contributors and testers](https://github.com/filipstrand/mflux/graphs/contributors)
---
### ⚖️ License
This project is licensed under the [MIT License](LICENSE).