Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/black-forest-labs/flux
Official inference repo for FLUX.1 models
- Host: GitHub
- URL: https://github.com/black-forest-labs/flux
- Owner: black-forest-labs
- License: apache-2.0
- Created: 2024-08-01T09:04:19.000Z (about 2 months ago)
- Default Branch: main
- Last Pushed: 2024-08-05T10:11:15.000Z (about 1 month ago)
- Last Synced: 2024-08-05T10:30:50.818Z (about 1 month ago)
- Language: Python
- Size: 4.78 MB
- Stars: 3,449
- Watchers: 33
- Forks: 194
- Open Issues: 30
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- ai-game-devtools - Flux - text-to-image and image-to-image with our Flux latent rectified flow transformers. | | | Image | (<span id="image">Image</span> / <span id="tool">Tool (AI LLM)</span>)
- awesome - black-forest-labs/flux - Official inference repo for FLUX.1 models (Python)
- Awesome-AITools - Github - forest-labs/flux?style=social)|Free| (Featured Articles / AI Image Creation)
- StarryDivineSky - black-forest-labs/flux - Ultra, etc. This new model not only inherits the strong foundations of Stable Diffusion but also achieves major breakthroughs in several areas. (Other_Machine Vision / Web Services_Other)
- awesome-flux-ai - Flux AI - open-source text-to-image AI models developed by Black Forest Labs. The platform aims to advance state-of-the-art generative deep learning models for media, pushing the boundaries of creativity, efficiency, and diversity. (About Flux AI)
README
# FLUX
by Black Forest Labs: https://blackforestlabs.ai

![grid](assets/grid.jpg)
This repo contains minimal inference code to run text-to-image and image-to-image with our Flux latent rectified flow transformers.
### Inference partners
We are happy to partner with [Replicate](https://replicate.com/) and [FAL](https://fal.ai/). You can sample our models using their services.
Below we list relevant links.

Replicate:
- https://replicate.com/collections/flux
- https://replicate.com/black-forest-labs/flux-pro
- https://replicate.com/black-forest-labs/flux-dev
- https://replicate.com/black-forest-labs/flux-schnell

FAL:
- https://fal.ai/models/fal-ai/flux-pro
- https://fal.ai/models/fal-ai/flux/dev
- https://fal.ai/models/fal-ai/flux/schnell

## Local installation
```bash
cd $HOME && git clone https://github.com/black-forest-labs/flux
cd $HOME/flux
python3.10 -m venv .venv
source .venv/bin/activate
pip install -e '.[all]'
```

### Models
We are offering three models:
- `FLUX.1 [pro]` the base model, available via API
- `FLUX.1 [dev]` guidance-distilled variant
- `FLUX.1 [schnell]` guidance and step-distilled variant

| Name | HuggingFace repo | License | md5sum |
|-------------|-------------|-------------|-------------|
| `FLUX.1 [schnell]` | https://huggingface.co/black-forest-labs/FLUX.1-schnell | [apache-2.0](model_licenses/LICENSE-FLUX1-schnell) | a9e1e277b9b16add186f38e3f5a34044 |
| `FLUX.1 [dev]` | https://huggingface.co/black-forest-labs/FLUX.1-dev| [FLUX.1-dev Non-Commercial License](model_licenses/LICENSE-FLUX1-dev) | a6bd8c16dfc23db6aee2f63a2eba78c0 |
| `FLUX.1 [pro]` | Only available in our API. | | |

The weights of the autoencoder are also released under [apache-2.0](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md) and can be found in either of the two HuggingFace repos above. They are the same for both models.
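If you download the weights manually, the md5sums in the table above can be used to verify the files. This streaming checker is a generic sketch (not part of this repo); it avoids loading multi-GB checkpoint files into memory at once:

```python
import hashlib

def md5sum(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the md5 hex digest of a file by reading it in 1 MiB chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()
```

Compare the returned digest against the table entry for the model you downloaded.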
## Usage
The weights will be downloaded automatically from HuggingFace once you start one of the demos. To download `FLUX.1 [dev]`, you will need to be logged in, see [here](https://huggingface.co/docs/huggingface_hub/guides/cli#huggingface-cli-login).
If you have downloaded the model weights manually, you can specify the downloaded paths via environment variables:
```bash
export FLUX_SCHNELL=<path_to_flux_schnell_sft_file>
export FLUX_DEV=<path_to_flux_dev_sft_file>
export AE=<path_to_ae_sft_file>
```

For interactive sampling run
```bash
python -m flux --name <name> --loop
```
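How the environment-variable overrides above might be resolved can be sketched as follows; `resolve_checkpoint` is a hypothetical helper for illustration, not part of this repo:

```python
import os

def resolve_checkpoint(env_var: str, default_repo: str) -> str:
    """Return the locally downloaded path if the env var is set, else the HF repo id."""
    # e.g. FLUX_SCHNELL=/weights/flux1-schnell.sft skips the automatic download
    return os.environ.get(env_var) or default_repo

print(resolve_checkpoint("FLUX_SCHNELL", "black-forest-labs/FLUX.1-schnell"))
```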
Or to generate a single sample run
```bash
python -m flux --name <name> \
  --height <height> --width <width> \
  --prompt "<prompt>"
```

We also provide a streamlit demo that does both text-to-image and image-to-image. The demo can be run via
```bash
streamlit run demo_st.py
```

We also offer a Gradio-based demo for an interactive experience. To run the Gradio demo:
```bash
python demo_gr.py --name flux-schnell --device cuda
```

Options:
- `--name`: Choose the model to use (options: "flux-schnell", "flux-dev")
- `--device`: Specify the device to use (default: "cuda" if available, otherwise "cpu")
- `--offload`: Offload model to CPU when not in use
- `--share`: Create a public link to your demo

To run the demo with the dev model and create a public link:
```bash
python demo_gr.py --name flux-dev --share
```

## Diffusers integration
`FLUX.1 [schnell]` and `FLUX.1 [dev]` are integrated with the [🧨 diffusers](https://github.com/huggingface/diffusers) library. To use it with diffusers, install it:
```shell
pip install git+https://github.com/huggingface/diffusers.git
```

Then you can use `FluxPipeline` to run the model:
```python
import torch
from diffusers import FluxPipeline

model_id = "black-forest-labs/FLUX.1-schnell"  # you can also use `black-forest-labs/FLUX.1-dev`
pipe = FluxPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # save some VRAM by offloading the model to CPU; remove this if you have enough GPU memory

prompt = "A cat holding a sign that says hello world"
seed = 42
image = pipe(
prompt,
output_type="pil",
num_inference_steps=4, #use a larger number if you are using [dev]
generator=torch.Generator("cpu").manual_seed(seed)
).images[0]
image.save("flux-schnell.png")
```

To learn more check out the [diffusers](https://huggingface.co/docs/diffusers/main/en/api/pipelines/flux) documentation.
## API usage
Our API offers access to the pro model. It is documented here:
[docs.bfl.ml](https://docs.bfl.ml/).

In this repository we also offer an easy python interface. To use this, you
first need to register with the API on [api.bfl.ml](https://api.bfl.ml/), and
create a new API key.

To use the API key either run `export BFL_API_KEY=<your_api_key>` or provide
it via the `api_key=<your_api_key>` parameter. It is also expected that you
have installed the package as above.

Usage from python:
```python
from flux.api import ImageRequest

# this will create an api request directly but not block until the generation is finished
request = ImageRequest("A beautiful beach")
# or: request = ImageRequest("A beautiful beach", api_key="your_key_here")

# any of the following will block until the generation is finished
request.url
# -> https:<...>/sample.jpg
request.bytes
# -> b"..." bytes for the generated image
request.save("outputs/api.jpg")
# saves the sample to local storage
request.image
# -> a PIL image
```

Usage from the command line:
```bash
$ python -m flux.api --prompt="A beautiful beach" url
https:<...>/sample.jpg

# generate and save the result
$ python -m flux.api --prompt="A beautiful beach" save outputs/api

# open the image directly
$ python -m flux.api --prompt="A beautiful beach" image show
```
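The submit-then-block behavior of `ImageRequest` described above (the request is created immediately, while `.url`, `.bytes`, `.save`, and `.image` block until generation finishes) can be sketched generically. `LazyResult` below is a hypothetical illustration of that pattern, not the actual implementation:

```python
class LazyResult:
    """Hypothetical sketch of the submit-then-block pattern: work starts eagerly,
    but the result is only waited for (and cached) on first attribute access."""

    def __init__(self, fetch):
        self._fetch = fetch   # callable that blocks until the sample is ready
        self._result = None

    @property
    def url(self):
        if self._result is None:
            self._result = self._fetch()  # first access blocks; later accesses reuse the cache
        return self._result
```

In the real interface, `request.bytes`, `request.save`, and `request.image` would follow the same idea, each derived from the one completed result.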