UPDATE!
Download my ComfyUI build: https://huggingface.co/datasets/StefanFalkok/ComfyUI_portable_torch_2.10.0_cu130_cp313_sageattention_triton/tree/main . You also need to download and install CUDA 13.0 (https://developer.nvidia.com/cuda-13-0-0-download-archive) and Visual Studio (https://visualstudio.microsoft.com/downloads/).
My TG Channel - https://t.me/StefanFalkokAI
My TG Chat - https://t.me/+y4R5JybDZcFjMjFi
Also big thanks to
for helping with workflows in the past!
Hi! I'm introducing my working workflows for Wan 2.2 video generation in ComfyUI.
I have included 4 workflows - t2v, i2v, v2v, and flf2v (first/last frame) - plus foley audio.
You need the Wan 2.2 10-step models (https://huggingface.co/StefanFalkok/Wan_2.2_10steps/tree/main), the CLIP text encoder (https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/blob/main/split_files/text_encoders/umt5_xxl_fp16.safetensors), and the VAE (Wan 2.1 VAE).
GGUF Wan 2.2 10-step models: https://huggingface.co/StefanFalkok/Wan_2.2_10steps_GGUF
Leave a comment if you run into trouble or find a problem with the workflows.
Comments (30)
How do you adjust frame rate or speed in this workflow? Many of my generations are the dreaded slowmo videos. Thanks!
Hi. It depends on what you generate. In my case I set 81 frames in the Latent settings and 16 fps for the output. I don't recommend setting more than 81 frames because the result can be unstable. You may also need to write better prompts to avoid slowmo generations.
Also, I recommend using only the LightX t2v rank 256 bf16 LoRA instead of the 4-step LoRAs to avoid slowmo generations.
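For reference, the numbers above determine the clip length directly: latent frame count divided by output frame rate. A quick sanity check of the recommended settings (81 frames at 16 fps, as in the reply above):

```python
frames = 81  # latent frame count set in the Latent settings
fps = 16     # output frame rate
duration = frames / fps
print(f"{frames} frames at {fps} fps = {duration:.2f} s")  # ~5.06 s
```

If a video feels like slow motion despite these settings, the motion itself was generated too slowly - changing the fps only speeds up playback, it doesn't add motion.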
@Stefan_Falkok does the light t2v lora still apply even if im only trying to generate i2v?
@ImLearningPlsBeNice Yes, it does. The LightX t2v LoRA works perfectly with i2v Wan.
Thanks for this OP. Your workflow with the uni_pc sampler works really well for 5s videos, so I tried generating 8 second videos using the RES4LYF samplers but nothing has worked so far. What settings and nodes do you recommend for creating 8 second videos? I mainly use i2v
Honestly, Wan 2.2 doesn't work correctly beyond 81 frames (more than 5 seconds), so I recommend generating videos at 81 frames. I also want to update my workflows to speed up CLIP and VAE processing.
Why isn't audio working? I don't see any MMAudio nodes implemented?
Maybe because I included a workflow with HunyuanVideo Foley, not MMAudio.
Thanks for the workflow. But I get this error:
KSamplerAdvanced
too many values to unpack (expected 4)
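"too many values to unpack (expected 4)" is a generic Python error: something returned more values than the caller expects, which typically happens when a custom node's output signature changes after an update. A minimal reproduction of the error class itself (illustrative only, not the actual node code):

```python
def node_output():
    # imagine a custom node that returns 5 values after an update
    return 1, 2, 3, 4, 5

try:
    a, b, c, d = node_output()  # the caller still expects 4 values
except ValueError as e:
    msg = str(e)
    print(msg)  # too many values to unpack (expected 4)
```

This is why updating (or rolling back) the mismatched custom node usually fixes it.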
Hi, it's a MultiGPU problem, and soon I'll update the workflow with a MultiGPU fix.
For now I've uploaded distorch_2.py to fix the problem. Read the instructions:
https://huggingface.co/datasets/StefanFalkok/ComfyUI_portable_torch_2.9.1_cu130_cp313_sageattention_triton
Great workflow, thank you. But I have this error, and I tried many ways but still can't fix it:
ComfyUI Error Report - Error Details:
- Node ID: 185
- Node Type: KSamplerAdvanced
- Exception Type: RuntimeError
- Exception Message: Given groups=1, weight of size [5120, 36, 1, 2, 2], expected input[1, 48, 21, 72, 128] to have 36 channels, but got 48 channels instead.
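That RuntimeError is a channel-count mismatch: the model's first 3D convolution expects a latent with 36 channels, but the latent fed to it has 48, which points at mismatched model/encoder/VAE variants rather than a sampler bug. A rough sketch of the shape check involved (plain Python, not the actual PyTorch code; shapes copied from the error message):

```python
def check_conv_channels(weight_shape, input_shape):
    # weight: [out_ch, in_ch, kD, kH, kW]; input: [batch, C, frames, H, W]
    expected, got = weight_shape[1], input_shape[1]
    if expected != got:
        raise RuntimeError(
            f"expected input to have {expected} channels, "
            f"but got {got} channels instead"
        )

# Shapes from the error report above: weight wants 36, latent delivers 48
try:
    check_conv_channels([5120, 36, 1, 2, 2], [1, 48, 21, 72, 128])
except RuntimeError as e:
    print(e)  # expected input to have 36 channels, but got 48 channels instead
```

Because the channel count is baked into the checkpoint and the latent is produced by the text encoder/VAE/conditioning chain, swapping those components (as suggested in the replies below this comment in the original thread) is the usual way to resolve it.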
Hi. The first thing you need to do is update the ComfyUI-MultiGPU node. Second, I recommend downloading the bf16 text encoder from https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/tree/main/split_files/text_encoders. If you already have this encoder, skip that step. Maybe it helps.
Also update ComfyUI.
@Stefan_Falkok Thanks for your reply. As you said, I updated ComfyUI, the ComfyUI-MultiGPU node, and everything else. And I already have this encoder, but I still have the same problem.
@PatrickChen Maybe you have a problem with your Python environment.
Download the include.zip archive and put all the files from it into ComfyUI/python_embeded/Include. I hope it works.
If it doesn't, install my ComfyUI build, put the include files in the same path, and update ComfyUI along with the custom nodes.
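If you'd rather script the copy step than drag files by hand, a minimal sketch is below. The source folder name is illustrative (wherever you unpacked include.zip); the destination is the path named in the instructions above.

```python
import pathlib
import shutil

def copy_includes(src_dir, dst_dir):
    """Copy every file from src_dir into dst_dir, preserving subfolders."""
    src, dst = pathlib.Path(src_dir), pathlib.Path(dst_dir)
    for f in src.rglob("*"):
        if f.is_file():
            target = dst / f.relative_to(src)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy file contents and metadata

# Illustrative call; adjust the first path to where you unpacked include.zip:
copy_includes("include_unpacked", "ComfyUI/python_embeded/Include")
```

Existing files in the Include folder with the same names are overwritten, which is the intent here (the archive ships fixed headers).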
@Stefan_Falkok I did everything you said, but I still get the same problem. It really bothers me.
@PatrickChen What models do you use in the workflow? Write out the models in your UNet loader, CLIP, and VAE nodes.
Wan2_2-I2V-A14B-HIGH_fp8_e4m3fn_scaled_KJ.safetensors, wan_2.1_vae.safetensors, umt5_xxl_fp8_e4m3fn_scaled.safetensors. Basically everything you recommend. It's so weird.
@PatrickChen Aaah, I see the problem. Don't use the fp8 CLIP. Use only the fp16 CLIP from https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/tree/main/split_files/text_encoders and the workflow will work.
@Stefan_Falkok I tried both CLIPs, fp16 and fp8, and still got the same problem.
@PatrickChen Oh. Then maybe it's a problem with the Wan model. Try using another Wan model like https://civitai.com/models/2086218/wan-22-10-steps-t2v-and-i2v-fp8-gguf-q80-q4km-models - it should work.
@Stefan_Falkok I tried Wan_2.2_I2V_HighNoise_10steps_fp8.safetensors and Wan_2.2_I2V_LowNoise_10steps_fp8.safetensors, both downloaded from your Hugging Face, and also the GGUF - same problem again. Now I'm going to completely reinstall my ComfyUI; I hope that fixes it.
@Stefan_Falkok Fuck it! I fully reinstalled ComfyUI based on your build and it still doesn't work, exactly the same problem. Perhaps your excellent workflow just won't run on my computer. 😂😂😂
@PatrickChen I'll update the workflow soon. But first, download my workflows from Flux2 or Qwen Image 2512 and read the instructions about distorch_2.py; maybe that can help.
@Stefan_Falkok Your other workflows work great on my computer, and I already fixed the distorch_2.py issue; only wan2.2 v2v correct.json has this error.
@PatrickChen Then just wait until I update the workflow. Have fun!
@Stefan_Falkok thanks
Great work! Huge thanks for all the work you put in, with everything described in such detail! I'm sure you're a Russian speaker.
It's nice to read things like that :)