CivArchive

    UPDATE!

    Download my ComfyUI build: https://huggingface.co/datasets/StefanFalkok/ComfyUI_portable_torch_2.10.0_cu130_cp313_sageattention_triton/tree/main. You also need to download and install CUDA 13.0 (https://developer.nvidia.com/cuda-13-0-0-download-archive) and Visual Studio (https://visualstudio.microsoft.com/downloads/).

    My TG Channel - https://t.me/StefanFalkokAI

    My TG Chat - https://t.me/+y4R5JybDZcFjMjFi

    Also big thanks to

    https://t.me/purrykitty

    https://t.me/neural_vault34

    https://t.me/teamArtRaccoon

    for helping with workflows in the past!

    Hi! Let me introduce my working Wan 2.2 video generation workflows for ComfyUI.

    I have included workflows for t2v, i2v, v2v, flf2v (first/last frame), and Foley audio.

    You need the Wan 2.2 10-step models (https://huggingface.co/StefanFalkok/Wan_2.2_10steps/tree/main), the CLIP text encoder (https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/blob/main/split_files/text_encoders/umt5_xxl_fp16.safetensors), and the VAE (the Wan 2.1 VAE).

    GGUF Wan 2.2 10-step models:

    https://huggingface.co/StefanFalkok/Wan_2.2_10steps_GGUF

    Leave a comment if you run into trouble or find a problem with the workflows.


    Comments (30)

    IHaveLearned · Oct 25, 2025
    CivitAI

    How do you adjust frame rate or speed in this workflow? Many of my generations are the dreaded slowmo videos. Thanks!

    Stefan_Falkok
    Author
    Oct 25, 2025

    Hi. It depends on what you generate. In my case I set 81 frames in the latent settings and a 16 fps frame rate on the output. I don't recommend setting more than 81 frames because the result can be unstable. You may also need to write better prompts to avoid slow-mo in your generations.
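    As a quick sanity check on those numbers (a sketch, not part of the workflow): clip length is simply the frame count divided by the output frame rate.

```python
# Duration of a generated clip = frames / fps.
frames = 81  # set in the latent settings
fps = 16     # set on the output node
print(f"{frames / fps:.2f} s")  # prints "5.06 s"
```

    So the recommended 81-frame limit corresponds to roughly 5 seconds of video at 16 fps.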

    Stefan_Falkok
    Author
    Oct 25, 2025

    Also, to avoid slow-mo generations I recommend using only the Lightx t2v rank 256 bf16 LoRA instead of the 4-step LoRAs.

    IHaveLearned · Oct 25, 2025

    @Stefan_Falkok does the Lightx t2v LoRA still apply even if I'm only trying to generate i2v?

    Stefan_Falkok
    Author
    Oct 25, 2025

    @ImLearningPlsBeNice yes, it does. The Lightx t2v LoRA works perfectly with Wan i2v.

    qdzero · Nov 19, 2025
    CivitAI

    Thanks for this, OP. Your workflow with the uni_pc sampler works really well for 5-second videos, so I tried generating 8-second videos using the RES4LYF samplers, but nothing has worked so far. What settings and nodes do you recommend for creating 8-second videos? I mainly use i2v.

    Stefan_Falkok
    Author
    Nov 19, 2025

    Honestly, Wan 2.2 doesn't work correctly beyond 81 frames (more than 5 seconds), so I recommend generating videos at 81 frames. I also want to update my workflows to speed up CLIP and VAE processing.

    Jdoe666 · Nov 19, 2025 · 2 reactions
    CivitAI

    Why isn't audio working? I don't see any MMAudio nodes implemented?

    Stefan_Falkok
    Author
    Nov 19, 2025

    Maybe because I included a workflow with HunyuanVideo-Foley, not MMAudio.

    kenpachi601674 · Jan 3, 2026
    CivitAI

    Thanks for the workflow. But I get this error:

    KSamplerAdvanced

    too many values to unpack (expected 4)
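    For context, "too many values to unpack" is a generic Python error, not something specific to this workflow; a minimal illustration (the function below is made up for the example):

```python
# Tuple unpacking fails when the iterable's length doesn't match
# the number of target variables on the left-hand side.
def unpack_four(t):
    a, b, c, d = t  # raises ValueError unless t has exactly 4 items
    return a, b, c, d

print(unpack_four((1, 2, 3, 4)))   # prints (1, 2, 3, 4)
# unpack_four((1, 2, 3, 4, 5))     # ValueError: too many values to unpack (expected 4)
```

    In ComfyUI this usually means one node is returning more outputs than another node expects, which is why version mismatches between custom nodes can trigger it.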

    Stefan_Falkok
    Author
    Jan 3, 2026

    Hi, it's a MultiGPU problem; I'll update the workflow with a MultiGPU fix soon.

    In the meantime, I've uploaded distorch_2.py to fix the problem. Read the instructions here:

    https://huggingface.co/datasets/StefanFalkok/ComfyUI_portable_torch_2.9.1_cu130_cp313_sageattention_triton

    PatrickChen · Jan 4, 2026 · 3 reactions
    CivitAI

    Great workflow, thank you. But I have this error, and I've tried many ways to fix it, still no luck:

    ComfyUI Error Report

    Error Details
    - Node ID: 185
    - Node Type: KSamplerAdvanced
    - Exception Type: RuntimeError
    - Exception Message: Given groups=1, weight of size [5120, 36, 1, 2, 2], expected input[1, 48, 21, 72, 128] to have 36 channels, but got 48 channels instead.

    Stefan_Falkok
    Author
    Jan 4, 2026

    Hi. First, update the ComfyUI-MultiGPU node. Second, I recommend downloading the bf16 text encoder from https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/tree/main/split_files/text_encoders (if you already have this encoder, skip this step). Maybe that helps.

    Stefan_Falkok
    Author
    Jan 4, 2026

    Also update ComfyUI.

    PatrickChen · Jan 4, 2026

    @Stefan_Falkok Thanks for your reply. As you suggested, I updated ComfyUI, the ComfyUI-MultiGPU node, and everything else. And I already have this encoder, but I still have the same problem.

    Stefan_Falkok
    Author
    Jan 4, 2026

    @PatrickChen Maybe you have a problem with your Python environment.

    Go to https://huggingface.co/datasets/StefanFalkok/ComfyUI_portable_torch_2.9.1_cu130_cp313_sageattention_triton

    Download the include.zip archive and put all files from it into ComfyUI/python_embeded/Include. I hope it works.

    If it doesn't, install my ComfyUI build, put the Include files in the same path, and update ComfyUI along with the custom nodes.
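    A minimal sketch of that unpacking step, assuming the portable layout described above (ComfyUI/python_embeded/Include); the function name is mine, not part of the build:

```python
import zipfile
from pathlib import Path

def install_include(archive_path: str, comfy_root: str) -> Path:
    """Unpack include.zip into <comfy_root>/python_embeded/Include."""
    target = Path(comfy_root) / "python_embeded" / "Include"
    target.mkdir(parents=True, exist_ok=True)  # create the folder if missing
    with zipfile.ZipFile(archive_path) as zf:
        zf.extractall(target)                  # overwrites files with the same names
    return target
```

    For example, install_include("include.zip", "ComfyUI") would fill ComfyUI/python_embeded/Include with the archive's contents; on Windows you can equally just drag the files in with Explorer.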

    PatrickChen · Jan 5, 2026

    @Stefan_Falkok I did everything you said, but I still get the same problem. It's really bothering me.

    Stefan_Falkok
    Author
    Jan 5, 2026

    @PatrickChen What models are you using in the workflow? List the models in your UNet loader, CLIP, and VAE nodes.

    PatrickChen · Jan 5, 2026

    Wan2_2-I2V-A14B-HIGH_fp8_e4m3fn_scaled_KJ.safetensors, wan_2.1_vae.safetensors, umt5_xxl_fp8_e4m3fn_scaled.safetensors. Basically all of your recommendations. It's so weird.

    Stefan_Falkok
    Author
    Jan 5, 2026

    @PatrickChen Ah, I see the problem. Don't use the fp8 CLIP; use only the fp16 CLIP from https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/tree/main/split_files/text_encoders and the workflow will work.

    PatrickChen · Jan 5, 2026

    @Stefan_Falkok I tried both CLIPs, fp16 and fp8, and still get the same problem.

    Stefan_Falkok
    Author
    Jan 5, 2026

    @PatrickChen Oh. Then maybe it's a problem with the Wan model. Try using another Wan model like https://civitai.com/models/2086218/wan-22-10-steps-t2v-and-i2v-fp8-gguf-q80-q4km-models - it should work.

    PatrickChen · Jan 5, 2026 · 1 reaction

    @Stefan_Falkok I tried both Wan_2.2_I2V_HighNoise_10steps_fp8.safetensors and Wan_2.2_I2V_LowNoise_10steps_fp8.safetensors downloaded from your Hugging Face, and also the GGUF, with the same problem again. Now I'm going to completely reinstall my ComfyUI and hope that fixes it.

    PatrickChen · Jan 5, 2026

    @Stefan_Falkok Fuck it! I fully reinstalled ComfyUI based on your build, and it still doesn't work - exactly the same problem. Perhaps your excellent workflow just won't run on my computer. 😂😂😂

    Stefan_Falkok
    Author
    Jan 5, 2026

    @PatrickChen I'll update the workflow soon. But first, download my workflows from Flux2 or Qwen Image 2512 and read the instructions about distorch_2.py - maybe that can help.

    PatrickChen · Jan 6, 2026 · 1 reaction

    @Stefan_Falkok Your other workflows work great on my computer, and I already fixed the distorch_2.py issue; only wan2.2 v2v correct.json has this error.

    Stefan_Falkok
    Author
    Jan 6, 2026 · 1 reaction

    @PatrickChen Then just wait until I update the workflow. Have fun!

    PatrickChen · Jan 6, 2026

    @Stefan_Falkok thanks

    ZELECTRONIC · Jan 7, 2026 · 2 reactions
    CivitAI

    Great work! Thank you so much for all the work you've put in, with everything described in such detail! I'm sure you're a Russian speaker.

    Stefan_Falkok
    Author
    Jan 8, 2026

    It's nice to read things like this :)

    Workflows
    Wan Video 2.2 I2V-A14B

    Details

    Downloads
    2,243
    Platform
    CivitAI
    Platform Status
    Available
    Created
    10/17/2025
    Updated
    5/4/2026
    Deleted
    -

    Files