---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/LICENSE.md
base_model:
  - black-forest-labs/FLUX.1-dev
pipeline_tag: text-to-image
library_name: diffusers
tags:
  - flux
  - text-to-image
---

# Flux.1 Lite

We are excited to announce the alpha version of our new distilled Flux.1 Lite model, an 8B-parameter transformer distilled from the original FLUX.1-dev model.

Our goal is to further shrink the FLUX.1-dev transformer so that it fits within 24 GB of VRAM, making it compatible with most consumer GPU cards.
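As a rough back-of-the-envelope sketch of why this fits: assuming the 8B parameters stated above are stored in bfloat16 (2 bytes each) and ignoring activations, the text encoders, and the VAE, the transformer weights alone take roughly 15 GB, leaving headroom on a 24 GB card:

```python
# Rough memory estimate for the distilled transformer weights.
# Assumption: 8e9 parameters (from this card) stored in bfloat16 (2 bytes each).
num_params = 8e9
bytes_per_param = 2  # bfloat16
transformer_gb = num_params * bytes_per_param / 1024**3
print(f"~{transformer_gb:.1f} GB for the transformer weights alone")  # ~14.9 GB
```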

## News 🔥🔥🔥

Try our Hugging Face demo:

- Flux.1 Lite demo hosted on 🤗 flux.1-lite

## Introduction

Flux.1 Lite is the result of distilling the FLUX.1-dev transformer into a smaller 8B-parameter model. In this repository, we release the alpha checkpoint of that distilled model so it can be used directly with diffusers.

## Checkpoints

- `flux.1-lite-8B-alpha.safetensors`: transformer checkpoint, in the original Flux single-file format (see the loading sketch below).
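Because the checkpoint is stored in the original Flux single-file layout, one option is to load the transformer on its own and drop it into a standard FLUX.1-dev pipeline. This is only a sketch: the single-file loading path and the download URL below are assumptions based on the usual diffusers workflow, not instructions from this card; the full pipeline repo used in the example further down is the simpler route.

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel

# Assumption: the standalone checkpoint listed under "Checkpoints" can be loaded
# with from_single_file(); the URL below follows the standard Hub file layout.
transformer = FluxTransformer2DModel.from_single_file(
    "https://huggingface.co/Freepik/flux.1-lite-8B-alpha/blob/main/flux.1-lite-8B-alpha.safetensors",
    torch_dtype=torch.bfloat16,
)

# Reuse the rest of the FLUX.1-dev pipeline (text encoders, VAE, scheduler)
# and swap in the distilled transformer.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
).to("cuda")
```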

## Text-to-Image Usage

### FLUX.1-dev-related models

```python
import torch
from diffusers import FluxPipeline

model_id = "Freepik/flux.1-lite-8B-alpha"
torch_dtype = torch.bfloat16
device = "cuda"

# Load the pipeline
pipe = FluxPipeline.from_pretrained(
    model_id, torch_dtype=torch_dtype
).to(device)

# Inference
prompt = "Scene inspired by 2000 comedy animation, a glowing green alien whose fluorescent skin emits light, standing in a dark purple forest. The alien is holding a large sign that reads 'LITE 8B ALPHA' in bold letters. The forest around is shadowy, with tall, eerie trees and mist rolling in. The alien radiates a soft, supernatural glow, illuminating the surroundings, creating a stark contrast between light and darkness. Style of an old comic, with flat colors, halftone shading, and a slightly weathered, vintage texture."

guidance_scale = 3.5  # Important: keep guidance_scale at 3.5
n_steps = 28
seed = 11

with torch.inference_mode():
    image = pipe(
        prompt=prompt,
        generator=torch.Generator(device="cpu").manual_seed(seed),
        num_inference_steps=n_steps,
        guidance_scale=guidance_scale,
        height=1024,
        width=1024,
    ).images[0]
image.save("output.png")
```
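If the full pipeline still does not fit in your GPU's memory, diffusers' generic offloading helper can be used as a fallback; this is a standard library feature rather than something specific to this model:

```python
# Optional: on GPUs with less VRAM, let diffusers move idle sub-models to the CPU
# instead of keeping everything resident. Use this in place of `.to(device)` above.
pipe.enable_model_cpu_offload()
```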