sayakpaul
posted an update about 23 hours ago
Did a little experimentation on resizing pre-trained LoRAs for Flux. I explored two themes:

* Decrease the rank of a LoRA
* Increase the rank of a LoRA

The first is useful for reducing memory requirements when a LoRA has a high rank; the second is mostly an experiment. Another implication of this study is unifying LoRA ranks when you want to torch.compile() them: if all LoRAs share the same rank, switching between them does not change tensor shapes and so does not trigger recompilation.
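A common way to do this kind of resizing (a minimal sketch of the general idea, not necessarily the exact method in the repo; the function name is mine) is a truncated SVD of the LoRA's weight delta:

```python
import torch


def resize_lora(A: torch.Tensor, B: torch.Tensor, new_rank: int):
    """Resize a LoRA (delta_W = B @ A) to `new_rank` via truncated SVD.

    A: (rank, in_features), B: (out_features, rank).
    Works for both decreasing and increasing the rank; when increasing,
    the extra singular values of the low-rank delta are (near) zero.
    """
    delta = B @ A  # (out_features, in_features)
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    # Keep the top `new_rank` components; fold singular values into B.
    B_new = U[:, :new_rank] * S[:new_rank]
    A_new = Vh[:new_rank, :]
    return A_new, B_new


# Example: shrink a rank-8 LoRA to rank 4.
A = torch.randn(8, 64)
B = torch.randn(128, 8)
A_small, B_small = resize_lora(A, B, new_rank=4)
```

When `new_rank` is at least the original rank, `B_new @ A_new` reproduces the original delta up to numerical error; below it, you get the best rank-`new_rank` approximation in the Frobenius-norm sense.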

Check it out here:
sayakpaul/flux-lora-resizing

It also looks like we could convert large model diffs into low-rank LoRAs for fast switching.
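That last idea could look like this in practice, sketched for a single weight matrix (an assumption of mine; names and shapes are illustrative, and a real model would repeat this per layer):

```python
import torch


def diff_to_lora(W_base: torch.Tensor, W_tuned: torch.Tensor, rank: int):
    """Approximate a full weight difference with a rank-`rank` LoRA pair."""
    delta = W_tuned - W_base
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    B = U[:, :rank] * S[:rank]  # (out_features, rank)
    A = Vh[:rank, :]            # (rank, in_features)
    return A, B


# Example: recover a rank-4 update hidden in a full weight diff.
W_base = torch.randn(128, 64)
update = torch.randn(128, 4) @ torch.randn(4, 64)
W_tuned = W_base + update
A, B = diff_to_lora(W_base, W_tuned, rank=4)
```

Switching adapters then means swapping small `(A, B)` pairs instead of reloading full checkpoints, which is the "fast switching" payoff.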