How do I fine-tune the attention layers of the UNet?

#3
by Meeeeq

There are optimized DreamBooth repositories, but I don't know whether I have access to this feature. Do you have any published code that demonstrates fine-tuning the attention layers? Thanks

Also curious...

@Meeeeq check my implementation

https://github.com/kopyl/diffusers/blob/main/examples/text_to_image/train_text_to_image.py

Just add the --train_attention_only argument when running train_text_to_image.py.
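For anyone who doesn't want to pull the fork, here's a minimal sketch of what attention-only fine-tuning amounts to in plain diffusers, assuming UNet2DConditionModel; matching module names on "attn" (diffusers names its attention blocks attn1/attn2) is my assumption about which layers count as attention, and the model ID is just an example:

```python
import torch
from diffusers import UNet2DConditionModel

# Example checkpoint; swap in whichever base model you're fine-tuning.
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

# Freeze the entire UNet first.
unet.requires_grad_(False)

# Unfreeze only the attention modules. In diffusers' UNet2DConditionModel,
# self- and cross-attention live in modules named "attn1" / "attn2".
for name, module in unet.named_modules():
    if "attn" in name.split(".")[-1]:
        for param in module.parameters():
            param.requires_grad = True

# Optimize only the trainable (attention) parameters.
optimizer = torch.optim.AdamW(
    (p for p in unet.parameters() if p.requires_grad), lr=1e-5
)
```

The rest of the training loop stays the same as the standard train_text_to_image.py; since everything else is frozen, gradients only flow into the attention weights.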

Now I'm curious why the author trained attention-only first and then the full UNet, instead of training the full UNet from the beginning...
