https://github.com/hathibelagal-dev/stable-diffusion-mlx
Run stable diffusion on Mac with MLX
- Host: GitHub
- URL: https://github.com/hathibelagal-dev/stable-diffusion-mlx
- Owner: hathibelagal-dev
- License: gpl-3.0
- Created: 2025-07-09T13:44:42.000Z (3 months ago)
- Default Branch: main
- Last Pushed: 2025-07-09T14:14:07.000Z (3 months ago)
- Last Synced: 2025-07-09T14:49:59.761Z (3 months ago)
- Language: Python
- Size: 32.2 KB
- Stars: 0
- Watchers: 0
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
Stable Diffusion 1.4 in MLX
===========================

NovelAI has released the weights for their older SD1.5-based NovelAI Diffusion V2 anime model! I created this repo to be able to run **nai-anime-v2** on an old Mac using MLX.
So, by default, this repo uses the UNet and VAE of `NovelAI/nai-anime-v2`; the remaining components come from `CompVis/stable-diffusion-v1-4`.
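For readers more familiar with the Hugging Face diffusers API, the same component mixing can be sketched as follows. This is an illustration only, not how this repo loads weights (it uses MLX), and it assumes `NovelAI/nai-anime-v2` is published in the standard diffusers layout with `unet/` and `vae/` subfolders:

```python
# Illustration only: the same component mix expressed with diffusers.
from diffusers import AutoencoderKL, StableDiffusionPipeline, UNet2DConditionModel

# UNet and VAE from the NovelAI anime model...
unet = UNet2DConditionModel.from_pretrained("NovelAI/nai-anime-v2", subfolder="unet")
vae = AutoencoderKL.from_pretrained("NovelAI/nai-anime-v2", subfolder="vae")

# ...and everything else (text encoder, tokenizer, scheduler) from SD 1.4.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", unet=unet, vae=vae
)

image = pipe("flowers, flower field, sunset, no humans",
             negative_prompt="humans").images[0]
image.save("output.png")
```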
**Note:** SD 1.4 is particularly strong for generating stylized images and art. It uses the same text encoder as SD 1.5, CLIP ViT-L/14, and was trained on subsets of the LAION-5B dataset, specifically "laion-aesthetics v2 5+" for aesthetic quality.
## Sample Usage
Once you clone this repository and install the requirements, you can run the following command to generate an image:
```bash
SD_PROMPT="flowers, flower field, sunset, no humans" \
SD_NEGATIVE_PROMPT="humans" \
SD_SEED=56 \
python3 t2i_sd.py
```

The above command will generate an image named `output.png`. You can use the `-o` parameter to change the output filename.
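All configuration is passed through environment variables like the ones above. As a rough sketch of what reading them might look like inside `t2i_sd.py` (the fallback defaults below are guesses, except for `SD_STEPS`, which is documented to default to 50):

```python
# Sketch only: how t2i_sd.py might read its env-var configuration.
# The variable names match the README; the fallback defaults are guesses.
import os

prompt = os.environ.get("SD_PROMPT", "")
negative_prompt = os.environ.get("SD_NEGATIVE_PROMPT", "")
seed = int(os.environ.get("SD_SEED", "0"))      # guessed default
steps = int(os.environ.get("SD_STEPS", "50"))   # 50 is the documented default
```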
## Example Outputs
Performance is decent: you can get a good image with `SD_STEPS` set to around 28, and even at the default of 50 steps it takes only about 50 seconds to generate an image.
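To reproduce that timing comparison, a small wrapper like the following can be used. It is not part of the repo and assumes `t2i_sd.py` accepts the environment variables and `-o` flag shown above:

```python
# Timing sketch (not part of this repo): compare generation time at
# different SD_STEPS values by invoking t2i_sd.py as shown in the README.
import os
import subprocess
import time

for steps in (28, 50):  # ~28 is usually enough; 50 is the default
    env = dict(os.environ,
               SD_PROMPT="flowers, flower field, sunset, no humans",
               SD_SEED="56",
               SD_STEPS=str(steps))
    start = time.time()
    subprocess.run(["python3", "t2i_sd.py", "-o", f"steps_{steps}.png"],
                   env=env, check=True)
    print(f"SD_STEPS={steps}: {time.time() - start:.1f}s")
```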