Now faster and easier to install
This workflow generates a small baseline with the 14B image-to-video model, upscales it, and then smooths the result with the 5B model.
This lets you test prompts and iterate more quickly on the base generation before upscaling to a final resolution.
Links for all the required models and where to put them are now included in the workflow.
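For reference, the usual ComfyUI folder layout for these files looks roughly like this. The exact filenames come from the links embedded in the workflow; the names and locations below are illustrative, and depending on your loader node, GGUF files may go in models/unet instead of models/diffusion_models:

```
ComfyUI/models/
├── diffusion_models/   # Wan 14B I2V model (GGUF quant) and the 5B model
├── loras/              # 4-step lightning LoRA(s)
├── text_encoders/      # umt5-xxl text encoder
└── vae/                # both the Wan 2.1 and Wan 2.2 VAEs (see FAQ)
```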
FAQ
Do I need both Wan 2.1 and 2.2 VAEs?
Yes. The 2.2 VAE only works with the 5B model (confusing, I know). Make sure the main section loads the 2.1 VAE, and the upscale section loads the 2.2 VAE.
It's frozen on VAE decode
The second VAE decode can take a long time. Just be patient.
Description
Update for Wan 2.2
No smooth pass this time. For 2.2, there isn't a very small model that would be well suited, and with GGUF quants and the 4-step LoRA, this new workflow runs faster than the Wan 2.1 versions.
FAQ
Comments (10)
There are new LoRAs to speed up the process: https://huggingface.co/lightx2v/Wan2.2-I2V-A14B-Moe-Distill-Lightx2v/tree/main/loras. I'm not the best with ComfyUI, but can you add a Power LoRA Loader? That way motion LoRAs can be used, etc.
Thanks for the heads up! So far I can't get those loras to work. They seem to require the lightx2v custom node, and I can't get that to install without blowing up the entire rest of my setup. I've seen some reports that the lightx2v node runs as much as 50% faster, but I'm not sure if that's just in comparison to baseline Wan 2.2, or if it includes other speedups like Sage/Sparge attention.
I'm still testing various speedups, like torch compile, sparge attention, and block swap. So far nothing has been significant enough to warrant the additional complexity. I'll definitely keep experimenting though.
@HazardAI Check this workflow: https://pastebin.com/g19a5seP. That's what I found regarding the new LoRAs.
@bakaboy1234 Running that workflow, I get a log message about blocks not being loaded from the 4-step LoRAs. It appears to be all of them. That workflow also seems to be using both a Wan 2.1 lightning LoRA and a Wan 2.2 version. But if it's just skipping loading the new one (which I'm also getting in my workflow when using it with GGUF models), then I think it ends up being functionally the same thing, plus torch compile and patching sage attention.
I'll keep experimenting with the new loras to see if I can get them to work and if they're faster.
FYI, there is a smaller version of Wan 2.2: the 5B.
https://huggingface.co/QuantStack/Wan2.2-TI2V-5B-GGUF/tree/main
And there's a speed LoRA that lets the 5B run in 4-8 steps.
I can't currently run v2v with the 14B Wan on my rig; the wait is just too long. If I'm going to wait that long, I'd rather use the Ultimate SD Upscaler instead.
So I'm using the 5B Wan in my workflow currently. The results are still pretty amazing, and it's faster.
I did attempt to use that one for smoothing, in the same style as the previous version of this workflow. I didn't notice any real improvement in quality, though. And the time to run a smooth pass on a high-resolution video ended up being comparable to just running it at about the same resolution with the 14B version, at least for the resolutions I tested (although I do imagine it would save a lot of VRAM).
Do you find the quality of the 5B model better than just running the 14B at a lower resolution?
Can you share your workflow?
The workflow works really well! If I want to add and use a LoRA with it, what should I do?
Same question. Should we be using a LoRA manager?
Power LoRA Loaders for high and low noise already exist on the far left side of the flow. Just add the LoRA you want by clicking 'add' under the lightning LoRAs and it'll work.