dverdu-freepik committed on
Commit 37e9e0c
• 1 Parent(s): f398ebc

update readme
README.md CHANGED
@@ -15,29 +15,13 @@ tags:
 
  # Flux.1 Lite
 
- We are thrilled to announce the alpha release of Flux.1 Lite, an 8B parameter transformer model distilled from the FLUX.1-dev model.
-
- Our goal? To distill FLUX.1-dev further until we achieve to reduce the parameters to just 24 GB, so it can run smoothly on most consumer-grade GPU cards, making high-quality AI models accessible to everyone.
+ We are thrilled to announce the alpha release of Flux.1 Lite, an 8B parameter transformer model distilled from the FLUX.1-dev model. This version uses 7 GB less RAM and runs 23% faster while maintaining the same precision (bfloat16) as the original model.
 
  ![Flux.1 Lite vs FLUX.1-dev](./sample_images/models_comparison.png)
 
- ## Motivation
-
- As stated by other members of the community like [Ostris](https://ostris.com/2024/09/07/skipping-flux-1-dev-blocks/), it seems that blocks of the Flux1.dev transformer have a different contribution to the final image generation. To explore this, we analyzed the Mean Squared Error (MSE) between the input and output of each block, revealing significant variability.
-
- Our findings? Not all blocks contribute equally. The results are striking: skipping just one of the early MMDIT blocks can significantly impact model performance, whereas skipping the rest of the blocks do not have a significant impact over the final image quality.
-
- ![Flux.1 Lite generated image](./sample_images/skip_blocks/generated_img.png)
- ![MSE MMDIT](./sample_images/skip_blocks/mse_mmdit_img.png)
- ![MSE DIT](./sample_images/skip_blocks/mse_dit_img.png)
-
- Furthermore, as displayed in the following image, only when you skip one of the first MMDIT blocks, the performance of the model severely impacts the model's performance.
- ![Skip one MMDIT block](./sample_images/skip_blocks/skip_one_MMDIT_block.png)
- ![Skip one DIT block](./sample_images/skip_blocks/skip_one_DIT_block.png)
-
- ## Text-to-Image Usage
+ ## Text-to-Image
 
- Flux.1 Lite is ready to unleash your creativity! For the best results, we recommend using a `guidance_scale` of 3.5 and setting `n_steps` between 22 and 30.
+ Flux.1 Lite is ready to unleash your creativity! For the best results, we strongly **recommend using a `guidance_scale` of 3.5 and setting `n_steps` between 22 and 30**.
 
  ```python
  import torch
@@ -66,27 +50,48 @@ with torch.inference_mode():
  generator=torch.Generator(device="cpu").manual_seed(seed),
  num_inference_steps=n_steps,
  guidance_scale=guidance_scale,
- height=1024,s
+ height=1024,
  width=1024,
  ).images[0]
  image.save("output.png")
  ```
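+
+ This hunk shows only the lines of the snippet that changed; assembled from the fragments above, a complete minimal example might look like the sketch below (the `FluxPipeline` import, the prompt, and the device handling are our assumptions, since the unchanged lines are elided by the diff):
+
+ ```python
+ import torch
+ from diffusers import FluxPipeline  # assumed pipeline class for Flux-family checkpoints
+
+ # Repo id taken from the HuggingFace link in the News section
+ model_id = "Freepik/flux.1-lite-8B-alpha"
+ pipe = FluxPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16).to("cuda")
+
+ prompt = "A serene mountain landscape at sunrise"  # placeholder prompt (assumption)
+ guidance_scale = 3.5  # recommended value
+ n_steps = 28  # recommended range is 22-30
+ seed = 11
+
+ with torch.inference_mode():
+     image = pipe(
+         prompt=prompt,
+         # a CPU generator keeps the seed reproducible across GPU setups
+         generator=torch.Generator(device="cpu").manual_seed(seed),
+         num_inference_steps=n_steps,
+         guidance_scale=guidance_scale,
+         height=1024,
+         width=1024,
+     ).images[0]
+ image.save("output.png")
+ ```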
 
+ ## Motivation
+
+ Inspired by the findings of [Ostris](https://ostris.com/2024/09/07/skipping-flux-1-dev-blocks/), we analyzed the mean squared error (MSE) between the input and output of each block to quantify its contribution to the final result, revealing significant variability; a schematic of this measurement is sketched after the images below.
+
+ As Ostris pointed out, not all blocks contribute equally. While skipping just one of the early MMDIT blocks can significantly degrade model performance, skipping the remaining blocks has no significant impact on the final image quality.
+
+ ![Flux.1 Lite generated image](./sample_images/skip_blocks/generated_img.png)
+ ![MSE MMDIT](./sample_images/skip_blocks/mse_mmdit_img.png)
+ ![MSE DIT](./sample_images/skip_blocks/mse_dit_img.png)
+
+ Furthermore, as the following images show, the model's output is severely affected only when one of the first MMDIT blocks is skipped.
+
+ ![Skip one MMDIT block](./sample_images/skip_blocks/skip_one_MMDIT_block.png)
+ ![Skip one DIT block](./sample_images/skip_blocks/skip_one_DIT_block.png)
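+
+ The measurement itself is simple; the following is a minimal, illustrative sketch on a toy block stack (the toy stand-in is our assumption; the actual analysis runs over the Flux transformer's MMDIT and DIT blocks):
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ # Toy stand-in for a stack of transformer blocks (illustrative only)
+ blocks = nn.ModuleList(
+     [nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True) for _ in range(8)]
+ )
+
+ x = torch.randn(1, 16, 64)  # dummy hidden states: (batch, tokens, dim)
+ mse_per_block = []
+ with torch.inference_mode():
+     for block in blocks:
+         y = block(x)
+         # MSE between a block's input and output: a low value means the block
+         # barely transforms the signal, marking it as a candidate for removal
+         mse_per_block.append(torch.mean((y - x) ** 2).item())
+         x = y
+ print(mse_per_block)
+ ```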
+
+ ## Future work
+
+ Stay tuned! Our goal is to distill FLUX.1-dev further until it runs smoothly on consumer-grade GPU cards with 24 GB of memory, keeping the original precision (bfloat16) and running even faster, making high-quality AI models accessible to everyone.
+
  ## ComfyUI
+
  We've also crafted a ComfyUI workflow to make using Flux.1 Lite even more seamless! Find it in `comfy/flux.1-lite_workflow.json`.
  ![ComfyUI workflow](./comfy/flux.1-lite_workflow.png)
 
- ## Checkpoints
- * `flux.1-lite-8B-alpha.safetensors`: Transformer checkpoint, in Flux original format.
- * `transformers/`: Contains distilled 8B transformer model, in diffusers format.
+ The safetensors checkpoint is available here: [flux.1-lite-8B-alpha.safetensors](flux.1-lite-8B-alpha.safetensors)
+
+ ## Try it out at Freepik!
 
- ## 🤗 Hugging Face space:
- Flux.1 Lite demo host on [🤗 flux.1-lite](https://huggingface.co/spaces/Freepik/flux.1-lite)
+ Our [AI generator](https://www.freepik.com/pikaso/ai-image-generator) is now powered by Flux.1 Lite!
 
  ## 🔥 News 🔥
- * Oct.18, 2024. Alpha 8B checkpoint and comparison demo 🤗 (i.e. [Flux.1 Lite](https://huggingface.co/spaces/Freepik/flux.1-lite)) is publicly available on [HuggingFace Repo](https://huggingface.co/Freepik/flux.1-lite-8B-alpha).
+
+ * Oct 23, 2024. Alpha 8B checkpoint is publicly available on the [HuggingFace Repo](https://huggingface.co/Freepik/flux.1-lite-8B-alpha).
 
  ## Citation
+
  If you find our work helpful, please cite it!
 
  ```bibtex
sample_images/freepik_logo.png DELETED

Git LFS Details

  • SHA256: 97936ddc0d528f81299ca2c56c791a1e3c03473dbc663dc0592a353d310af1b6
  • Pointer size: 130 Bytes
  • Size of remote file: 20.7 kB
sample_images/skip_blocks/skip_one_DIT_block.png CHANGED

Git LFS Details

  • SHA256: 8a0f7cc105a389e6d79da74dd4913147a7fc9f7ea59b8e96ef6fb8d5da826e2d
  • Pointer size: 133 Bytes
  • Size of remote file: 26.1 MB

Git LFS Details

  • SHA256: 17a1255b01e0e3b08720bf5439d2b19acdd7241a824baf9995d41da380c840c8
  • Pointer size: 133 Bytes
  • Size of remote file: 25.3 MB
sample_images/skip_blocks/skip_one_MMDIT_block.png CHANGED

Git LFS Details

  • SHA256: 585f7b17485e890f197d2b3e75e8c70962aa676208eb74f555c19cbaba3cfd31
  • Pointer size: 133 Bytes
  • Size of remote file: 11.9 MB

Git LFS Details

  • SHA256: 19c528100013e6d28748fd7767d80758ffac51f23a8bf0e3784974e0d44a6621
  • Pointer size: 133 Bytes
  • Size of remote file: 11.6 MB