Instructions for deployment/use?

#4
by sanctimon - opened

It is unclear how to make this work in ComfyUI or any other interface.

You basically just put the .safetensors file into "models/clip" (in the case of ComfyUI) and then select this model instead of "CLIP-L". For Flux, this is straightforward: the diffusion model is separate anyway, and you always load CLIP-L and T5 with a separate Text Encoder node.
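In shell terms, the file placement is just a copy into the right folder. This is only a sketch: `COMFY_ROOT` and the filename `clip-l-finetune.safetensors` are placeholders (use your actual install path and the actual file you downloaded; the `touch` merely stands in for that download so the snippet is self-contained):

```shell
# Sketch only: adjust COMFY_ROOT to your actual ComfyUI install.
COMFY_ROOT="${COMFY_ROOT:-/tmp/ComfyUI-demo}"
mkdir -p "$COMFY_ROOT/models/clip"
# Placeholder standing in for the downloaded text encoder file.
touch clip-l-finetune.safetensors
mv clip-l-finetune.safetensors "$COMFY_ROOT/models/clip/"
ls "$COMFY_ROOT/models/clip"
```

After a restart (or a refresh of the node's dropdown), the file shows up as a selectable option in ComfyUI's CLIP loader nodes.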
For SDXL and similar models, you first have to "unpack" the Text Encoders (they are bundled together with the diffusion U-Net and VAE in the checkpoint) via the "CLIPSave" node. Then you also put CLIP-G into the "models/clip" folder and select that, plus my model for CLIP-L. See this screenshot for details:
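If you want to sanity-check which encoder a given .safetensors file actually contains before dropping it into "models/clip" (e.g. to tell an unpacked CLIP-L from CLIP-G or a U-Net), you can read just the file's JSON header without loading any tensor data. This is a sketch: the `text_model.` key prefix below is an example from Hugging Face-style CLIP exports, and real key names vary depending on how the file was saved, so check the printed keys against what you expect. The demo file written here is a stand-in so the snippet runs on its own:

```python
import json
import struct

def read_safetensors_header(path):
    """Return the JSON header of a .safetensors file (no tensor data is read)."""
    with open(path, "rb") as f:
        # safetensors layout: 8-byte little-endian header length, then JSON header.
        header_len = struct.unpack("<Q", f.read(8))[0]
        return json.loads(f.read(header_len))

# Build a tiny stand-in file so this example is self-contained;
# in practice, point read_safetensors_header() at your downloaded model.
demo_header = {
    "text_model.encoder.layers.0.self_attn.q_proj.weight": {
        "dtype": "F32", "shape": [768, 768], "data_offsets": [0, 2359296],
    },
}
blob = json.dumps(demo_header).encode()
with open("demo.safetensors", "wb") as f:
    f.write(struct.pack("<Q", len(blob)) + blob)

header = read_safetensors_header("demo.safetensors")
# A text encoder exposes text-model weights; U-Net/VAE files will not.
print(any(k.startswith("text_model.") for k in header))  # → True
```

Listing `header.keys()` on each unpacked file makes it obvious which one is CLIP-L, which is CLIP-G, and which is the U-Net or VAE.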
Screenshot 2024-09-04 120533.png

I found this model thanks to the YouTube video. I'm kind of new to this, and I've been using Replicate for my Flux image generation workflows. @zer0int, do you have any instructions on how to use this model on Replicate?

Thanks for the link to the video! :)
Unfortunately, I am not familiar with Replicate. Do they let you upload your own models (it's a cloud compute service, right?)? If so, you should be able to upload the text encoder (as seen in the DualCLIPLoader node in my workflow). If they don't let you change or upload models, you're out of luck in the short term. But seeing as this model got a lot of downloads and likes, you could try suggesting it to them as an alternative option for users to access via their API.
