Now faster and easier to install
This workflow generates a small baseline video with the 14B image-to-video model, upscales it, and then smooths out the result with the 5B model.
This lets you test prompts and iterate more quickly on the base generation before upscaling to the final resolution.
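The three stages can be sketched in plain Python. The function names and dict fields below are hypothetical stand-ins for the ComfyUI nodes, not a real API; the point is just the shape of the data flow:

```python
# Illustrative sketch of the workflow's three stages.
# All functions here are hypothetical stand-ins for ComfyUI nodes.

def base_generation(image, prompt, seed):
    """Stage 1: low-resolution image-to-video pass (14B model)."""
    # The baseline video inherits the source image's resolution.
    return {"frames": 16, "width": image["width"], "height": image["height"]}

def upscale(video, factor):
    """Stage 2: spatial upscale of the baseline frames."""
    return {**video, "width": video["width"] * factor,
                     "height": video["height"] * factor}

def smooth_pass(video, seed, denoise=0.4):
    """Stage 3: video-to-video smoothing pass (5B model).
    A low denoise keeps the motion and content, cleaning upscale artifacts."""
    return {**video, "denoise": denoise}

image = {"width": 480, "height": 272}
base = base_generation(image, "example prompt", seed=80085)
final = smooth_pass(upscale(base, factor=2), seed=80085)
print(final["width"], final["height"])  # 960 544
```

Because stage 1 is cheap, you can re-roll prompts there and only run the expensive upscale/smooth stages once you're happy with the motion.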
Links for all the required models and where to put them are now included in the workflow.
FAQ
Do I need both Wan 2.1 and 2.2 VAEs?
Yes. The 2.2 VAE only works with the 5B model (confusing, I know). Make sure the main section loads the 2.1 VAE and the upscale section loads the 2.2 VAE.
It's frozen on VAE decode
The second VAE decode can take a long time. Just be patient.
Comments (8)
Thanks for this, trying it out now. How does seeding work with the multiple KSampler nodes? Do they need to match or is it just in the initial sampler?
I'm not quite sure what you mean by seeding. The second KSampler is essentially separate from the first: the output of the first generation is upscaled, then VAE encoded for the t2v model to run a video-to-video pass that smooths it out. You could use any seed for the second KSampler, since it's its own process.
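A toy sketch of why the two seeds are independent. The `ksampler` function below is purely illustrative (not ComfyUI's actual sampler): the second pass starts from the fixed, upscaled latent, so its seed only affects its own noise, not the base generation:

```python
# Illustrative toy "sampler": blends a fixed latent toward seeded noise.
# This is NOT ComfyUI's KSampler, just a stand-in to show seed independence.
import random

def ksampler(latent, seed, denoise):
    rng = random.Random(seed)  # deterministic for a given seed
    return [x * (1 - denoise) + rng.random() * denoise for x in latent]

upscaled_latent = [0.1, 0.5, 0.9]  # stage-1 output, already fixed
a = ksampler(upscaled_latent, seed=1, denoise=0.3)
b = ksampler(upscaled_latent, seed=2, denoise=0.3)
# Different second-pass seeds give different refinements of the SAME input,
# so matching the first sampler's seed buys you nothing.
```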
@HazardAI Cool, thanks! Wasn't sure, since the seeds for both were fixed with the same value. 80085 lol
Oh I see. I keep the seed fixed because it makes it easier to make changes without having to re-run the entire workflow. For example, changing the denoise strength on the second half, or changing the frame interpolation settings for a slo-mo effect.
Where is the resolution setting located? I've looked around a bit but I can't find it. English isn't my first language, and I've only been able to find the resolution settings for images
The resolution for the initial generation is set when loading the source image in the "Load and Resize Image" node. The size of the upscaled video is set by the "Resize Image" node in the "Prep Video" section. Both parts use the image resolution to set the generation resolution.
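In other words, there is no separate video-resolution widget: whatever size the image comes out of those resize nodes at is what gets generated. A hypothetical helper, assuming (as many video models do) that dimensions must snap to a multiple of 16:

```python
# Hypothetical helper: snap a requested size to the nearest model-friendly
# multiple. The divisible-by-16 requirement is an assumption, not taken
# from the workflow itself.

def snap_resolution(width, height, multiple=16):
    return (round(width / multiple) * multiple,
            round(height / multiple) * multiple)

print(snap_resolution(833, 475))  # -> (832, 480)
```

So if your generations come out at a slightly different size than the image you loaded, the resize node likely rounded to a size the model accepts.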