This is an improved version of EechiZero's workflow found here: https://civarchive.com/models/1822764/wan-22-i2v-gguf-compact-speed-wf-or-lightning-lora-44-steps
Shoutout to them and their work; this is the best Wan 2.2 workflow I've tried yet.
I've made a few improvements for this version:
Replaced deprecated resize image node
Added size selector
Changed resize mode from keep aspect ratio -> crop (this makes images way less blurry)
Added interpolation
Added upscale
Disabled sage attention (conflicts for me, some might be able to re-enable)
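The crop-mode change above works because cropping to the target aspect ratio first means the final resize scales both axes uniformly, instead of stretching (or padding) one axis, which is what softens the image. A minimal sketch of the idea, using a hypothetical helper (not a node from the workflow):

```python
def crop_box_for_aspect(width, height, target_w, target_h):
    """Return a centered (left, top, right, bottom) crop box that matches
    the target aspect ratio, so the subsequent resize is uniform and the
    image stays sharp (no blur from stretching one axis more than the other)."""
    target_ratio = target_w / target_h
    src_ratio = width / height
    if src_ratio > target_ratio:
        # Source is wider than the target: crop the sides.
        new_w = round(height * target_ratio)
        left = (width - new_w) // 2
        return (left, 0, left + new_w, height)
    else:
        # Source is taller than the target: crop top and bottom.
        new_h = round(width / target_ratio)
        top = (height - new_h) // 2
        return (0, top, width, top + new_h)

# Example: fit a 1920x1080 frame to a 832x480 target before resizing.
print(crop_box_for_aspect(1920, 1080, 832, 480))  # → (24, 0, 1896, 1080)
```

A "keep aspect ratio" resize of the same frame would instead squeeze 1920x1080 into 832x480 non-uniformly or letterbox it, which is where the blur came from.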
Wan 2.1 LoRAs also work with this workflow, but will cause warnings. Put the LoRA in both the high and low LoRA sampler nodes and set the strength high.
Links:
Lightx2v speed LoRA (Kijai's actual Wan 2.2 Lightning version): https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Wan22-Lightning
GGUF: https://huggingface.co/QuantStack/Wan2.2-I2V-A14B-GGUF/tree/main
or https://huggingface.co/bullerwins/Wan2.2-I2V-A14B-GGUF/tree/main
You need a High model and a Low model with the same quantization. The VAE is the same as WAN 2.1.
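Since the high-noise and low-noise GGUF files must share one quantization, a quick sanity check on the filenames can catch a mismatch before loading. This is a hypothetical helper; the filenames below are assumptions based on common GGUF naming (e.g. a `Q4_K_M` tag), so check the repo for the real ones:

```python
import re

def quant_tag(filename):
    """Extract the GGUF quantization tag (e.g. Q4_K_M, Q5_0, Q8_0)
    from a model filename; returns None if no tag is found."""
    m = re.search(r"(Q\d+_[A-Z0-9_]+|Q\d+)", filename)
    return m.group(1) if m else None

def check_pair(high_file, low_file):
    """Verify the high-noise and low-noise models use the same quantization."""
    hq, lq = quant_tag(high_file), quant_tag(low_file)
    if hq is None or lq is None or hq != lq:
        raise ValueError(f"Quantization mismatch: high={hq}, low={lq}")
    return hq

# Hypothetical filenames -- check the GGUF repo for the actual ones.
print(check_pair("Wan2.2-I2V-A14B-HighNoise-Q4_K_M.gguf",
                 "Wan2.2-I2V-A14B-LowNoise-Q4_K_M.gguf"))  # → Q4_K_M
```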
Make sure to update your ComfyUI.
Comments (4)
Was a pain in the ass to get the Tensor upscale node installed but was well worth it, end results are amazing!
Do you use Sage Attention? I'm sitting here looking through the workflow thinking... gut feel says I don't wanna install TensorRT. Wasn't that a pain? I just got Triton and Sage working and the speed increase was well worth it. The poster says he has Sage disabled due to conflicts. I wonder if it was TensorRT.
@parallelepipedon yes, I use sage attention (which was another pain in the ass to set up lol). One thing I did have to do to get the Tensor upscale to work was copy some files from C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA to another subdirectory in the same folder, as it was struggling to find some files. The location I had to copy them to was shown in the error in the console.
Can you add WanVideoNAG, since it helps a lot with weighting negative prompts?
