Files changed (1)
  1. README.md +2 -80
README.md CHANGED
@@ -1,81 +1,3 @@
- ---
- pipeline_tag: image-to-video
- license: other
- license_name: stable-video-diffusion-nc-community
- license_link: LICENSE
- ---
 
- # Stable Video Diffusion Image-to-Video Model Card
-
- <!-- Provide a quick summary of what the model is/does. -->
- ![row01](output_tile.gif)
- Stable Video Diffusion (SVD) Image-to-Video is a diffusion model that takes in a still image as a conditioning frame and generates a video from it.
-
- ## Model Details
-
- ### Model Description
-
- (SVD) Image-to-Video is a latent diffusion model trained to generate short video clips from a single image conditioning frame.
- This model was trained to generate 25 frames at resolution 576x1024 given a context frame of the same size, finetuned from [SVD Image-to-Video [14 frames]](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid).
- We also finetune the widely used [f8-decoder](https://huggingface.co/docs/diffusers/api/models/autoencoderkl#loading-from-the-original-format) for temporal consistency.
- For convenience, we additionally provide the model with the
- standard frame-wise decoder [here](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt/blob/main/svd_xt_image_decoder.safetensors).
-
-
- - **Developed by:** Stability AI
- - **Funded by:** Stability AI
- - **Model type:** Generative image-to-video model
- - **Finetuned from model:** SVD Image-to-Video [14 frames]
-
- ### Model Sources
-
- For research purposes, we recommend our `generative-models` GitHub repository (https://github.com/Stability-AI/generative-models),
- which implements the most popular diffusion frameworks (both training and inference).
-
- - **Repository:** https://github.com/Stability-AI/generative-models
- - **Paper:** https://stability.ai/research/stable-video-diffusion-scaling-latent-video-diffusion-models-to-large-datasets
-
-
- ## Evaluation
- ![comparison](comparison.png)
- The chart above shows user preference for SVD-Image-to-Video over [GEN-2](https://research.runwayml.com/gen2) and [PikaLabs](https://www.pika.art/).
- Human voters preferred SVD-Image-to-Video in terms of video quality. For details on the user study, see the [research paper](https://stability.ai/research/stable-video-diffusion-scaling-latent-video-diffusion-models-to-large-datasets).
-
- ## Uses
-
- ### Direct Use
-
- The model is intended for research purposes only. Possible research areas and tasks include:
-
- - Research on generative models.
- - Safe deployment of models which have the potential to generate harmful content.
- - Probing and understanding the limitations and biases of generative models.
- - Generation of artworks and use in design and other artistic processes.
- - Applications in educational or creative tools.
-
- Excluded uses are described below.
-
- ### Out-of-Scope Use
-
- The model was not trained to produce factual or true representations of people or events;
- using the model to generate such content is therefore out of scope.
- The model should not be used in any way that violates Stability AI's [Acceptable Use Policy](https://stability.ai/use-policy).
-
- ## Limitations and Bias
-
- ### Limitations
- - The generated videos are rather short (<= 4 sec), and the model does not achieve perfect photorealism.
- - The model may generate videos without motion, or with only very slow camera pans.
- - The model cannot be controlled through text.
- - The model cannot render legible text.
- - Faces and people in general may not be generated properly.
- - The autoencoding part of the model is lossy.
-
-
- ### Recommendations
-
- The model is intended for research purposes only.
-
- ## How to Get Started with the Model
-
- Check out https://github.com/Stability-AI/generative-models
 
+ from diffusers import DiffusionPipeline

+ pipeline = DiffusionPipeline.from_pretrained("thingthatis/stable-video-diffusion-img2vid-xt")