having trouble making this work with loras
#9 opened about 3 hours ago by swwww55

Update README.md (2)
#8 opened about 6 hours ago by Jawed020

Possible to work with 8GB VRAM and 16GB RAM? (2)
#7 opened about 20 hours ago by krigeta

How is this being trained? (1)
#6 opened 1 day ago by cuifeng
Do you have plans to support the stabilityai/stable-diffusion-3.5-large or jimmycarter/LibreFLUX models? (1)
#5 opened 1 day ago by michaelj
Even the 8B model makes many consumer-grade graphics cards unable to run smoothly (4)
#3 opened 1 day ago by jian2023

A theoretical formulation of how this works based on a paper by Sakana AI (1)
#2 opened 2 days ago by NagaSaiAbhinay

flux 4B version (5)
#1 opened 2 days ago by ayahareedy