Check out our Quickstart Guide! https://education.civitai.com/quickstart-guide-to-stable-video-diffusion/
The base img2vid model was trained to generate 14 frames at 1024x576 and uses less VRAM than the img2vid-xt model, which was trained to generate 25 frames at the same resolution.
img2vid-xt-1.1, the latest version, is finetuned to provide enhanced outputs with the following settings:
Width: 1024
Height: 576
Frames: 25
Motion Bucket ID: 127
FPS: 6
Augmentation Level: 0.00
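As a sketch, the recommended settings above can be expressed as the keyword arguments accepted by the StableVideoDiffusionPipeline in Hugging Face's diffusers library. The parameter names below come from the diffusers API and are an assumption about how you run the model; ComfyUI exposes the same values through its SVD nodes under the UI labels listed above.

```python
# Recommended img2vid-xt-1.1 settings as diffusers-style keyword arguments.
# (Parameter names assume the diffusers StableVideoDiffusionPipeline API;
# adjust to match your own workflow or UI.)
svd_xt_11_settings = {
    "width": 1024,
    "height": 576,
    "num_frames": 25,
    "motion_bucket_id": 127,    # controls how much motion the video has
    "fps": 6,                   # frames-per-second conditioning value
    "noise_aug_strength": 0.0,  # "Augmentation Level" in the UI
}

# Hypothetical usage (requires the model weights and a capable GPU):
# from diffusers import StableVideoDiffusionPipeline
# pipe = StableVideoDiffusionPipeline.from_pretrained(
#     "stabilityai/stable-video-diffusion-img2vid-xt-1-1")
# frames = pipe(image, **svd_xt_11_settings).frames[0]
```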
Stable Video Diffusion (SVD) Image-to-Video is a diffusion model that takes in a still image as a conditioning frame, and generates a video from it.
Developed by: Stability AI
Description
This model was trained to generate 14 frames at resolution 576x1024
FAQ
Comments (11)
Nice! So, how does it work?
Does this run in Automatic1111?
Hey all! I wrote a Getting Started Guide!
You only need 8GB of VRAM to produce these in ComfyUI!
I think there's something wrong with the metadata. Both show up as xt when I download them using API. (Via Stability Matrix.)
Unfortunately I cannot use the workflows/nodes on an Apple Mac M2 Max (or any MPS architecture), because I get the error RuntimeError: Conv3D is not supported on MPS. There are already some issues on GitHub concerning this, and work in this area seems to be "in progress".
The makers of ComfyUI also provide a workflow for using SVD on their examples page.
Where can I learn to use this?
Thank you.
ComfyUI
Stable Video Diffusion - RELEASED! - Local Install Guide
Is this for SD 1.5 or SDXL? And does it require ComfyUI? I'm running Automatic1111 on SD 1.5.
I'm so confused. Can I use this on A1111 v1.7.0? Can anyone help?