---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/LICENSE.md
base_model:
- black-forest-labs/FLUX.1-dev
pipeline_tag: text-to-image
library_name: diffusers
tags:
- flux
- text-to-image
---

![Flux.1 Lite](./sample_images/flux1-lite-8B_sample.png)

# Flux.1 Lite

We are thrilled to announce the alpha release of Flux.1 Lite, an 8B-parameter transformer distilled from the FLUX.1-dev model.

Our goal? To distill FLUX.1-dev into a lighter model whose memory footprint fits comfortably within 24 GB of VRAM, so it can run smoothly on most consumer-grade GPU cards, making high-quality AI models accessible to everyone.

![Flux.1 Lite vs FLUX.1-dev](./sample_images/models_comparison.png)

## Motivation

As other members of the community, such as [Ostris](https://ostris.com/2024/09/07/skipping-flux-1-dev-blocks/), have observed, the blocks of the FLUX.1-dev transformer do not all contribute equally to the final image. To explore this, we analyzed the Mean Squared Error (MSE) between the input and output of each block, and found significant variability.

Our findings? Not all blocks are created equal. By strategically skipping the less impactful blocks, we achieved substantial efficiency gains without compromising quality. The results are striking: skipping just one of the early MMDIT blocks significantly degrades image quality, whereas skipping other blocks has a much smaller effect.

![Flux.1 Lite generated image](./sample_images/skip_blocks/generated_img.png)
![MSE MMDIT](./sample_images/skip_blocks/mse_mmdit_img.png)
![MSE DIT](./sample_images/skip_blocks/mse_dit_img.png)
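
This kind of per-block analysis can be reproduced with PyTorch forward hooks. Below is a minimal sketch, assuming the diffusers layout in which the MMDIT (double-stream) blocks live in `pipe.transformer.transformer_blocks` and the DIT (single-stream) blocks in `pipe.transformer.single_transformer_blocks`; the exact call and return conventions of the blocks are internal details and may vary across diffusers versions.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

mse_per_block = {}  # block name -> list of MSE values, one per forward call

def make_hook(name):
    def hook(module, args, kwargs, output):
        # diffusers passes the block inputs as keyword arguments
        hidden_in = kwargs.get("hidden_states", args[0] if args else None)
        # Double-stream blocks return (encoder_hidden_states, hidden_states);
        # single-stream blocks return a single tensor (assumption about internals)
        hidden_out = output[1] if isinstance(output, tuple) else output
        mse = torch.mean((hidden_out.float() - hidden_in.float()) ** 2).item()
        mse_per_block.setdefault(name, []).append(mse)
    return hook

handles = [
    block.register_forward_hook(make_hook(f"mmdit_{i}"), with_kwargs=True)
    for i, block in enumerate(pipe.transformer.transformer_blocks)
] + [
    block.register_forward_hook(make_hook(f"dit_{i}"), with_kwargs=True)
    for i, block in enumerate(pipe.transformer.single_transformer_blocks)
]

with torch.inference_mode():
    pipe("a red fox in a snowy forest", num_inference_steps=8, guidance_scale=3.5)

for handle in handles:
    handle.remove()

# Average MSE per block across all denoising steps
for name, values in sorted(mse_per_block.items()):
    print(f"{name}: {sum(values) / len(values):.4f}")
```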

Furthermore, as the following images show, performance only degrades severely when one of the first MMDIT blocks is skipped; skipping any other single block has a much smaller effect.
![Skip one MMDIT block](./sample_images/skip_blocks/skip_one_MMDIT_block.png)
![Skip one DIT block](./sample_images/skip_blocks/skip_one_DIT_block.png)
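
A single-block skip experiment like the ones above can be sketched as follows. This assumes the same diffusers attribute names as before, and the skipped index (10 here) is purely illustrative rather than part of the distillation recipe:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Drop one MMDIT (double-stream) block before inference; the forward pass
# simply iterates over the remaining blocks. Index 10 is illustrative only.
skip_index = 10
pipe.transformer.transformer_blocks = torch.nn.ModuleList(
    [b for i, b in enumerate(pipe.transformer.transformer_blocks) if i != skip_index]
)

with torch.inference_mode():
    image = pipe(
        "A close-up image of a green alien with fluorescent skin",
        num_inference_steps=28,
        guidance_scale=3.5,
        generator=torch.Generator(device="cpu").manual_seed(11),
    ).images[0]
image.save(f"skip_mmdit_{skip_index}.png")
```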

## Text-to-Image Usage

Flux.1 Lite is ready to unleash your creativity! For the best results, we recommend using a `guidance_scale` of 3.5 and setting `n_steps` between 22 and 30.

```python
import torch
from diffusers import FluxPipeline

model_id = "Freepik/flux.1-lite-8B-alpha"
torch_dtype = torch.bfloat16
device = "cuda"

# Load the pipeline
pipe = FluxPipeline.from_pretrained(
    model_id, torch_dtype=torch_dtype
).to(device)

# Inference
prompt = "A close-up image of a green alien with fluorescent skin in the middle of a dark purple forest"

guidance_scale = 3.5  # Keep guidance_scale at 3.5
n_steps = 28
seed = 11

with torch.inference_mode():
    image = pipe(
        prompt=prompt,
        generator=torch.Generator(device="cpu").manual_seed(seed),
        num_inference_steps=n_steps,
        guidance_scale=guidance_scale,
        height=1024,
        width=1024,
    ).images[0]
image.save("output.png")
```
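
If the pipeline does not fit in your GPU's VRAM, diffusers' model CPU offloading trades some speed for memory; a minimal variant of the setup above:

```python
# Lower-VRAM variant: keep idle components on the CPU (slower, but lighter on VRAM).
# Note: skip the .to(device) call when offloading is enabled.
pipe = FluxPipeline.from_pretrained(model_id, torch_dtype=torch_dtype)
pipe.enable_model_cpu_offload()
```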

## ComfyUI
We've also crafted a ComfyUI workflow to make using Flux.1 Lite even more seamless! Find it in `comfy/flux.1-lite_workflow.json`.
![ComfyUI workflow](./comfy/flux.1-lite_workflow.png)

## Checkpoints
* `flux.1-lite-8B-alpha.safetensors`: the transformer checkpoint in the original Flux format.
* `transformers/`: the distilled 8B transformer in Diffusers format (see the loading sketch below).
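
To pair the Diffusers-format transformer with the remaining FLUX.1-dev components (text encoders, VAE, scheduler), something like the following sketch should work, assuming the subfolder name matches the `transformers/` directory listed above:

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel

# Load only the distilled transformer from this repo (Diffusers format)
transformer = FluxTransformer2DModel.from_pretrained(
    "Freepik/flux.1-lite-8B-alpha", subfolder="transformers", torch_dtype=torch.bfloat16
)

# Plug it into the rest of the FLUX.1-dev pipeline
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", transformer=transformer, torch_dtype=torch.bfloat16
).to("cuda")
```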

## 🤗 Hugging Face Space
A Flux.1 Lite demo is hosted at [🤗 flux.1-lite](https://huggingface.co/spaces/Freepik/flux.1-lite).

## 🔥 News 🔥
* Oct 18, 2024: The alpha 8B checkpoint and a comparison demo ([🤗 Flux.1 Lite](https://huggingface.co/spaces/Freepik/flux.1-lite)) are publicly available in the [Hugging Face repo](https://huggingface.co/Freepik/flux.1-lite-8B-alpha).

## Citation
If you find our work helpful, please cite it!

```bibtex
@article{flux1-lite,
  title={Flux.1 Lite: Distilling Flux1.dev for Efficient Text-to-Image Generation},
  author={Daniel Verdú and Javier Martín},
  email={dverdu@freepik.com, javier.martin@freepik.com},
  year={2024},
}
```