
Finetuning - daehan17/prior-weightupdate

This pipeline was finetuned from kandinsky-community/kandinsky-2-2-prior on the lambdalabs/naruto-blip-captions dataset. Below are example images generated with the finetuned pipeline using the prompt "A robot pokemon, 4k photo":

[validation image grid]

Pipeline usage

You can use the pipeline like so:

from diffusers import DiffusionPipeline
import torch

# Load the finetuned prior and the matching Kandinsky 2.2 decoder
pipe_prior = DiffusionPipeline.from_pretrained("daehan17/prior-weightupdate", torch_dtype=torch.float16)
pipe_t2i = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16)
pipe_prior.to("cuda")
pipe_t2i.to("cuda")

prompt = "A robot pokemon, 4k photo"

# The prior maps the text prompt to image embeddings, which the decoder turns into an image
image_embeds, negative_image_embeds = pipe_prior(prompt, guidance_scale=1.0).to_tuple()
image = pipe_t2i(image_embeds=image_embeds, negative_image_embeds=negative_image_embeds).images[0]
image.save("my_image.png")

Training info

These are the key hyperparameters used during training:

  • Epochs: 7
  • Learning rate: 1e-05
  • Batch size: 1
  • Gradient accumulation steps: 4
  • Image resolution: 768
  • Mixed-precision: None
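
The hyperparameters above can be assembled into a training invocation. This is a hedged sketch that assumes the diffusers Kandinsky 2.2 prior finetuning example script (examples/kandinsky2_2/text_to_image/train_text_to_image_prior.py) was used; the flag names and output directory are illustrative and should be checked against the script version actually used for this run:

```shell
# Sketch of a matching training command, assuming the diffusers
# train_text_to_image_prior.py example script; flag names and
# --output_dir are illustrative, not taken from this run's logs.
accelerate launch train_text_to_image_prior.py \
  --pretrained_prior_model_name_or_path="kandinsky-community/kandinsky-2-2-prior" \
  --dataset_name="lambdalabs/naruto-blip-captions" \
  --resolution=768 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --learning_rate=1e-05 \
  --num_train_epochs=7 \
  --output_dir="prior-weightupdate"
```

Note that with a batch size of 1 and 4 gradient accumulation steps, the effective batch size per optimizer step is 4.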

More information on all the CLI arguments and the training environment is available on the wandb run page.

