CivArchive
    WAN 2.2/2.1 - I2V/FLF2V - 2 workflows merge, FusionX lora, 2 sampler + Florence Caption, last frame, Color match - v1.1 (WAN 2.1 I2V)
    NSFW
    Preview 82948816

I found 2 workflows (WF) that I liked and decided to put them together:

I don't know which settings are best (CFG / steps / LoRA strength), but it seems to work as is for now.

    I use Sage Attention - LINK

    UPDATE COMFYUI BEFORE USE

    ==========

    v.1.0 (WAN 2.2 FLF2V)

WAN 2.2 has native FLF2V (First-Last Frame to Video) capabilities, so I tried to adjust my workflow to make it work, and it seems it does. Hope you like it.

    Enjoy.

    ==========

    v.1.0 (WAN 2.2 I2V)

I just rearranged the nodes so the workflow works with the WAN 2.2 GGUF models:

    https://huggingface.co/bullerwins/Wan2.2-I2V-A14B-GGUF/tree/main

Note that for the 14B model you need both models: HIGH noise and LOW noise.
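For intuition: the two checkpoints act as two experts over the denoising schedule, with the HIGH-noise model handling the early, noisy steps and the LOW-noise model finishing the later ones. A minimal sketch of that hand-off (the 50% switch point and the names are illustrative assumptions, not settings taken from this workflow):

    # Hypothetical sketch: route each denoising step to the matching WAN 2.2 expert.
    def pick_expert(step, total_steps, high_noise_model, low_noise_model, switch_fraction=0.5):
        # Early (noisy) steps go to the HIGH-noise expert, later steps to the LOW-noise one.
        switch_step = int(total_steps * switch_fraction)  # assumed boundary; tune per workflow
        return high_noise_model if step < switch_step else low_noise_model

    # Example: with 20 steps and a 0.5 split, steps 0-9 use HIGH noise, 10-19 use LOW noise.
    print([pick_expert(s, 20, "HIGH", "LOW") for s in range(20)])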

    ==========

    v.1.1 (WAN 2.1 I2V)

• Instead of using the CausVid LoRA, I used the FusionX LoRA, which already includes CausVid:

    Wan2.1_I2V_14B_FusionX_LoRA

• I set the LoRA strength to 0.4 in the first KSampler (HIGH CFG START) and 0.8 in the second one (LOW CFG END); see the sketch after this list for what the strength value controls.

• In version 1.0 you can just swap the LoRAs in the Dual Samplers group.
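A rough sketch of what that strength value does when a LoRA is applied to a weight matrix (a generic illustration with made-up sizes, not this workflow's actual loader code):

    import torch

    def apply_lora(weight, lora_down, lora_up, strength):
        # Effective weight = base weight + strength * (up @ down), the usual LoRA merge.
        return weight + strength * (lora_up @ lora_down)

    base = torch.randn(5120, 4096)   # made-up layer size
    down = torch.randn(32, 4096)     # rank-32 LoRA "down" matrix
    up = torch.randn(5120, 32)       # rank-32 LoRA "up" matrix

    w_first = apply_lora(base, down, up, 0.4)   # first KSampler (HIGH CFG START)
    w_second = apply_lora(base, down, up, 0.8)  # second KSampler (LOW CFG END)

Lower strength keeps the result closer to the base model; higher strength pushes it further toward the LoRA's behavior.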

    ==========

    v.1.0 (WAN 2.1 I2V)

    1 - CausVid 2 Sampler Workflow for Wan 480p/720p I2V (Main part)

I used this LoRA: Wan21_CausVid_14B_T2V_lora_rank32.safetensors

    2 - WAN 2.1 IMAGE to VIDEO with Caption and Postprocessing (Florence Caption, last frame, Color match)

    ==========

I saw suggestions that dpmpp_2m with the normal scheduler works well.

This WF uses dpmpp_2m with the simple scheduler.

    Description

Instead of CausVid I use the FusionX LoRA.

    FAQ

    Comments (35)

    blobby99Jun 18, 2025· 1 reaction
    CivitAI

With accelerators, CFG needs to be 1, LoRA strengths usually need to be higher than usual, and steps will usually be between 6 and 10. Taking CFG above 1 usually doubles render time. Again, with too many steps the advantage of accelerating methods is reduced.

    PS accelerators ruin prompt adherence, but that goes with the territory. One must experiment to find which types of prompt are still followed. BUT controlnets can mitigate this issue by bringing motion from an existing video.
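A note on why CFG above 1 roughly doubles render time: classifier-free guidance needs two model evaluations per step (conditional and unconditional), while at CFG 1 the unconditional pass can be skipped. A minimal sketch of the guidance step (generic, not ComfyUI's actual code):

    import torch

    def cfg_step(model, x, cond, uncond, cfg_scale):
        # With cfg_scale == 1 the unconditional pass contributes nothing,
        # so samplers can skip it and run the model once per step.
        cond_out = model(x, cond)
        if cfg_scale == 1.0:
            return cond_out
        uncond_out = model(x, uncond)   # the second, "extra" forward pass
        return uncond_out + cfg_scale * (cond_out - uncond_out)

    # Toy model: pretend conditioning is just added to the latent.
    toy_model = lambda x, c: x + c
    x = torch.zeros(4)
    print(cfg_step(toy_model, x, torch.ones(4), torch.zeros(4), 1.0))   # one pass
    print(cfg_step(toy_model, x, torch.ones(4), torch.zeros(4), 6.0))   # two passes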

    GFrost
    Author
    Jun 18, 2025

Thnx, I just like the result of this WF: no pixelation, no random contrast/brightness jumps. It takes ~15 min to render with 20 steps on a 3080 Ti. I use Sage. I didn't add TeaCache to it; I think it lowers hand quality. But I'm still trying things.

    satangelJun 19, 2025· 1 reaction
    CivitAI

    KSamplerAdvanced

    mat1 and mat2 shapes cannot be multiplied (231x768 and 4096x5120)

    may i know how to fix it?

    GFrost
    Author
    Jun 19, 2025

@satangel Hello. I haven't had that issue yet, but I found some topics on Reddit saying it may be caused by a LoRA or other models that are not compatible. Do you use any LoRA besides FusionX? Or did you change some other models in the WF? It's hard to say because your screenshot doesn't show what you have in the nodes.

    satangelJun 19, 2025

    got prompt

    Using xformers attention in VAE

    Using xformers attention in VAE

    VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16

    Requested to load CLIPVisionModelProjection

    loaded completely 9.5367431640625e+25 1208.09814453125 True

    Requested to load SD1ClipModel

    loaded completely 9.5367431640625e+25 235.84423828125 True

    CLIP/text encoder model load device: cpu, offload device: cpu, current: cpu, dtype: torch.float16

clip missing: ['text_model.embeddings.token_embedding.weight', 'text_model.embeddings.position_embedding.weight', ... (every layer_norm, self_attn, and mlp weight/bias for text_model.encoder.layers.0 through 11) ..., 'text_model.final_layer_norm.weight', 'text_model.final_layer_norm.bias', 'text_projection.weight']

    gguf qtypes: F16 (693), Q3_K (489), F32 (149)

    model weight dtype torch.float16, manual cast: None

    model_type FLOW

    [DisTorch] Full allocation string: #cuda:0;24.0;cpu

    Token indices sequence length is longer than the specified maximum sequence length for this model (333 > 77). Running this sequence through the model will result in indexing errors

    Requested to load WanVAE

    loaded completely 2658.0 242.02829551696777 True

lora key not loaded: diffusion_model.blocks.0.cross_attn.k_img.diff_b

lora key not loaded: diffusion_model.blocks.0.cross_attn.k_img.lora_down.weight

lora key not loaded: diffusion_model.blocks.0.cross_attn.k_img.lora_up.weight

lora key not loaded: diffusion_model.blocks.0.cross_attn.norm_k_img.diff

lora key not loaded: diffusion_model.blocks.0.cross_attn.v_img.diff_b

lora key not loaded: diffusion_model.blocks.0.cross_attn.v_img.lora_down.weight

lora key not loaded: diffusion_model.blocks.0.cross_attn.v_img.lora_up.weight

[... the same seven "lora key not loaded" lines repeat for diffusion_model.blocks.1 through diffusion_model.blocks.39 ...]

lora key not loaded: diffusion_model.img_emb.proj.0.diff

lora key not loaded: diffusion_model.img_emb.proj.0.diff_b

lora key not loaded: diffusion_model.img_emb.proj.1.diff_b

lora key not loaded: diffusion_model.img_emb.proj.1.lora_down.weight

lora key not loaded: diffusion_model.img_emb.proj.1.lora_up.weight

lora key not loaded: diffusion_model.img_emb.proj.3.diff_b

lora key not loaded: diffusion_model.img_emb.proj.3.lora_down.weight

lora key not loaded: diffusion_model.img_emb.proj.3.lora_up.weight

lora key not loaded: diffusion_model.img_emb.proj.4.diff

lora key not loaded: diffusion_model.img_emb.proj.4.diff_b

    Requested to load WAN21_Vace

    0 models unloaded.

    loaded partially 128.0 127.9981689453125 0

    ===============================================

    DisTorch Virtual VRAM Analysis

    ===============================================

    Object Role Original(GB) Total(GB) Virt(GB)

    -----------------------------------------------

    cuda:0 recip 8.00GB 32.00GB +24.00GB

    cpu donor 63.84GB 39.84GB -24.00GB

    -----------------------------------------------

    model model 7.29GB 0.00GB -24.00GB

    Warning: Model size is greater than 90% of recipient VRAM. 0.09 GB of GGML Layers Offloaded Automatically to Virtual VRAM.

    Allocation String cuda:0,0.0000;cpu,0.3759

    ===============================================

    DisTorch Device Allocations

    ===============================================

    Device Alloc % Total (GB) Alloc (GB)

    -----------------------------------------------

    cuda:0 0% 8.00 0.00

    cpu 37% 63.84 24.00

    -----------------------------------------------

    DisTorch GGML Layer Distribution

    -----------------------------------------------

    Layer Type Layers Memory (MB) % Total

    -----------------------------------------------

    Conv3d 2 8.79 0.1%

    Linear 495 7455.85 99.8%

    LayerNorm 145 0.94 0.0%

    RMSNorm 192 1.88 0.0%

    -----------------------------------------------

    DisTorch Final Device/Layer Assignments

    -----------------------------------------------

    Device Layers Memory (MB) % Total

    -----------------------------------------------

    cuda:0 0 0.00 0.0%

    cpu 834 7467.45 100.0%

    -----------------------------------------------

    0%| | 0/4 [00:00<?, ?it/s]

    !!! Exception during processing !!! mat1 and mat2 shapes cannot be multiplied (154x768 and 4096x5120)

    Traceback (most recent call last):

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\ComfyUI\execution.py", line 361, in execute

    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\ComfyUI\execution.py", line 236, in get_output_data

return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\ComfyUI\execution.py", line 208, in mapnode_over_list

    process_inputs(input_dict, i)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\ComfyUI\execution.py", line 197, in process_inputs

    results.append(getattr(obj, func)(**inputs))

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\ComfyUI\nodes.py", line 1550, in sample

    return common_ksampler(model, noise_seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise, disable_noise=disable_noise, start_step=start_at_step, last_step=end_at_step, force_full_denoise=force_full_denoise)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\ComfyUI\nodes.py", line 1483, in common_ksampler

    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\ComfyUI\comfy\sample.py", line 45, in sample

    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\ComfyUI\comfy\samplers.py", line 1139, in sample

    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\ComfyUI\comfy\samplers.py", line 1029, in sample

    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\ComfyUI\comfy\samplers.py", line 1014, in sample

    output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\ComfyUI\comfy\patcher_extension.py", line 111, in execute

    return self.original(*args, **kwargs)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\ComfyUI\comfy\samplers.py", line 982, in outer_sample

    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\ComfyUI\comfy\samplers.py", line 965, in inner_sample

    samples = executor.execute(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\ComfyUI\comfy\patcher_extension.py", line 111, in execute

    return self.original(*args, **kwargs)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\ComfyUI\comfy\samplers.py", line 744, in sample

    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\Python\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context

    return func(*args, **kwargs)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\ComfyUI\comfy\k_diffusion\sampling.py", line 741, in sample_dpmpp_2m

denoised = model(x, sigmas[i] * s_in, **extra_args)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\ComfyUI\comfy\samplers.py", line 396, in call

    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\ComfyUI\comfy\samplers.py", line 945, in call

    return self.predict_noise(*args, **kwargs)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\ComfyUI\comfy\samplers.py", line 948, in predict_noise

    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\ComfyUI\comfy\samplers.py", line 376, in sampling_function

    out = calc_cond_batch(model, conds, x, timestep, model_options)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\ComfyUI\comfy\samplers.py", line 206, in calc_cond_batch

    return executor.execute(model, conds, x_in, timestep, model_options)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\ComfyUI\comfy\patcher_extension.py", line 111, in execute

    return self.original(*args, **kwargs)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\ComfyUI\comfy\samplers.py", line 325, in calccond_batch

    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\ComfyUI\comfy\model_base.py", line 151, in apply_model

    return comfy.patcher_extension.WrapperExecutor.new_class_executor(

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\ComfyUI\comfy\patcher_extension.py", line 111, in execute

    return self.original(*args, **kwargs)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\ComfyUI\comfy\model_base.py", line 189, in applymodel

    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\Python\lib\site-packages\torch\nn\modules\module.py", line 1751, in wrappedcall_impl

    return self._call_impl(*args, **kwargs)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\Python\lib\site-packages\torch\nn\modules\module.py", line 1762, in callimpl

    return forward_call(*args, **kwargs)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\ComfyUI\comfy\ldm\wan\model.py", line 563, in forward

    return self.forward_orig(x, timestep, context, clip_fea=clip_fea, freqs=freqs, transformer_options=transformer_options, **kwargs)[:, :, :t, :h, :w]

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\ComfyUI\comfy\ldm\wan\model.py", line 662, in forward_orig

    context = self.text_embedding(context)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\Python\lib\site-packages\torch\nn\modules\module.py", line 1751, in wrappedcall_impl

    return self._call_impl(*args, **kwargs)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\Python\lib\site-packages\torch\nn\modules\module.py", line 1762, in callimpl

    return forward_call(*args, **kwargs)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\Python\lib\site-packages\torch\nn\modules\container.py", line 240, in forward

    input = module(input)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\Python\lib\site-packages\torch\nn\modules\module.py", line 1751, in wrappedcall_impl

    return self._call_impl(*args, **kwargs)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\Python\lib\site-packages\torch\nn\modules\module.py", line 1762, in callimpl

    return forward_call(*args, **kwargs)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\ComfyUI\comfy\ops.py", line 84, in forward

    return self.forward_comfy_cast_weights(*args, **kwargs)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\ComfyUI\custom_nodes\ComfyUI-GGUF\ops.py", line 217, in forward_comfy_cast_weights

out = super().forward_comfy_cast_weights(input, *args, **kwargs)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\ComfyUI\comfy\ops.py", line 80, in forward_comfy_cast_weights

    return torch.nn.functional.linear(input, weight, bias)

    File "D:\AI-T8-video-onekey-20250615\AI-T8-video-onekey-20250615\Python\lib\site-packages\torch\_tensor.py", line 1668, in __torch_function__

    ret = func(*args, **kwargs)

    RuntimeError: mat1 and mat2 shapes cannot be multiplied (154x768 and 4096x5120)

    Prompt executed in 47.90 seconds

    GFrost
    Author
    Jun 19, 2025· 1 reaction

@satangel I reproduced your error. The issue is the CLIP.

    Try to use umt5_xxl_fp8_e4m3fn_scaled.safetensors instead of umt5-xxl-enc-fp8_e4m3fn.safetensors

    https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files/text_encoders
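For anyone hitting the same error: the numbers in "mat1 and mat2 shapes cannot be multiplied (231x768 and 4096x5120)" suggest a 768-dim CLIP-style text embedding being fed into WAN's text_embedding layer, which expects 4096-dim UMT5-XXL features. A minimal reproduction of that shape clash (dimensions taken from the error message; the Linear layer here is a stand-in):

    import torch

    text_embedding = torch.nn.Linear(4096, 5120)   # WAN expects 4096-dim UMT5 features

    clip_l_tokens = torch.randn(231, 768)    # wrong encoder: 768-dim output
    umt5_tokens = torch.randn(231, 4096)     # correct encoder: 4096-dim output

    print(text_embedding(umt5_tokens).shape)  # works: torch.Size([231, 5120])
    text_embedding(clip_l_tokens)             # raises: mat1 and mat2 shapes cannot be multiplied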

    satangelJun 20, 2025· 1 reaction

@GFrost Many thanks, it's finally working now after I changed to the scaled model.

    skyrimer3dJun 19, 2025· 2 reactions
    CivitAI

Checked this and the results were very impressive. I used "switch to own prompt = TRUE" though; results were sometimes very random with it off.

    GFrost
    Author
    Jun 19, 2025

    Thnx, glad you liked it.

Yeah, it's meant to be a helper. I do use my own prompt sometimes, with a slight edit of the autogenerated one.

You can also use a smaller Florence model; it will describe a small portion of the picture so you can add more later.

Were the results random even if you added "After text" and/or "Pre text"?
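For context, Florence-2 captioning outside ComfyUI looks roughly like this (a sketch assuming the standard Hugging Face Florence-2 usage; the model choice, file name, and pre/after text strings are placeholders):

    from PIL import Image
    from transformers import AutoModelForCausalLM, AutoProcessor

    model_id = "microsoft/Florence-2-base"   # the "lesser" model; -large is more detailed
    processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

    image = Image.open("source.png").convert("RGB")
    task = "<MORE_DETAILED_CAPTION>"
    inputs = processor(text=task, images=image, return_tensors="pt")
    ids = model.generate(input_ids=inputs["input_ids"],
                         pixel_values=inputs["pixel_values"],
                         max_new_tokens=256, num_beams=3)
    raw = processor.batch_decode(ids, skip_special_tokens=False)[0]
    caption = processor.post_process_generation(raw, task=task, image_size=image.size)[task]

    # "Pre text" / "After text" are simply prepended/appended to the generated caption.
    prompt = "girl is talking on smartphone, " + caption + ", smooth motion"
    print(prompt)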

    skyrimer3dJun 21, 2025· 1 reaction

    @GFrost I'd have to try that too and see what happens.

    skyrimer3dJun 21, 2025· 1 reaction

@GFrost You're right. The first time I added my prompt in "my own prompt" and the results were random; adding it to pre text created a great prompt from just "girl is talking on smartphone". Sorry about that, I'm not used to Florence. You learn something new every day lol

    GFrost
    Author
    Jun 21, 2025· 1 reaction

@skyrimer3d True. I found Florence in one of the WFs recently and have been using it ever since.
    Have fun =)

    satangelJun 21, 2025· 1 reaction
    CivitAI

    https://ibb.co/pvqYrYSz


How come after interpolation the video quality seems to decrease, like the colour fading? Is there any way to improve it? Is it supposed to be upscaled first, or what?

    GFrost
    Author
    Jun 21, 2025· 1 reaction

The color match node takes the original colors from your picture and applies them to the interpolated output and the last-frame output. You can lower the strength of the color match node or bypass it completely.
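As a simplified illustration of what a color-match pass with a strength control does (a basic per-channel mean/std transfer, not necessarily the exact algorithm of the node in this workflow):

    import numpy as np

    def color_match(frame, reference, strength=1.0):
        # Shift the frame's per-channel mean/std toward the reference image, then blend.
        frame = frame.astype(np.float32)
        reference = reference.astype(np.float32)
        matched = frame.copy()
        for c in range(3):
            f_mean, f_std = frame[..., c].mean(), frame[..., c].std() + 1e-6
            r_mean, r_std = reference[..., c].mean(), reference[..., c].std()
            matched[..., c] = (frame[..., c] - f_mean) / f_std * r_std + r_mean
        # strength=1.0 applies full matching; lower values blend back toward the original frame.
        out = (1.0 - strength) * frame + strength * matched
        return np.clip(out, 0, 255).astype(np.uint8)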

    satangelJun 22, 2025· 1 reaction

    @GFrost  thanks ^^

    skyrimer3dJun 24, 2025· 1 reaction
    CivitAI

I've checked all the WFs I can find on Civitai, even the latest ones with self forcing etc., and nothing beats the amazing results and color matching of your WF (what is the fuss with self forcing? motion is terrible over 5 sec), congrats! I wanted to ask: can you make a WF based on this with first frame and last frame? I don't have a lot of Buzz but I can send some.

    GFrost
    Author
    Jun 24, 2025

Thanks. I basically just combined 2 WFs together =) and it's my first one. I'm not a great builder of WFs, but I will see what I can do in my spare time.

P.S. Don't send Buzz in advance; I might fail and it might take time.

This WF has a lazy version of your request: Last frame (the last node). You can put it in the Source image and continue from that. Sadly, the quality decreases slightly with each iteration.
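A rough sketch of that "lazy" continuation loop: grab the last frame of the rendered clip and feed it back as the next source image (file names are placeholders):

    import cv2

    def save_last_frame(video_path, out_path):
        cap = cv2.VideoCapture(video_path)
        frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
        cap.set(cv2.CAP_PROP_POS_FRAMES, frame_count - 1)   # jump to the final frame
        ok, frame = cap.read()
        cap.release()
        if ok:
            cv2.imwrite(out_path, frame)
        return ok

    # Use the saved frame as the source image for the next run; quality drifts a little
    # each iteration because every pass re-encodes and VAE-round-trips the image.
    save_last_frame("wan_clip_001.mp4", "next_source.png")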

    skyrimer3dJun 25, 2025· 1 reaction

@GFrost Great to hear, thanks! I read in the WF comments that you can send the last frame back into the WF, which is great, but being able to use a first-to-last-frame transition would be much better indeed.

    LynMSJun 28, 2025· 5 reactions
    CivitAI

The results are good, but it took 20 minutes to render a 7-second video (125 length) on a 4080. Why use the FusionX LoRA if the workflow needs more than 8 steps? I don't know the logic behind that, just asking. Thanks for the workflow btw.

    skyrimer3dJun 30, 2025· 2 reactions
    CivitAI

For anyone interested, I'm combining this WF, which for me has the best image quality of any WAN WF, with this LoRA: https://civitai.com/models/1713337/wan-self-forcing-rank-16-accelerator . I set both LoRAs to strength 1, sampler euler, scheduler beta, CFG 1, 4 steps, and I'm getting similar quality to the original WF, but 5 sec vids take 1-2 min. Give it a try.

    GFrost
    Author
    Jul 1, 2025· 1 reaction

Whoa! I will give it a try. =)
    Thnx!

    EshinioJul 14, 2025

    Are you putting that accelerator Lora in the "Power Lora Loader" at the start, or in the "LoraModelLoaderOnly" in the Dual Samplers section, replacing the FusionX Loras?
    Also, do you keep the "End at step" at 4 and are you setting the CFG and Steps the same in both KSamplers?

    mrrocko101269Jul 15, 2025

    @Eshinio Lol you ever figure this out?

    GFrost
    Author
    Jul 15, 2025

When I tried it, I replaced FusionX with his LoRA and set the same parameters in both. But in the end I returned to FusionX. It's slower, but the results are more satisfying to me.

    3cchi33v33Jul 15, 2025· 1 reaction
    CivitAI

    what motion lora did you use?

    GFrost
    Author
    Jul 15, 2025

    Hi, FusionX.

    slowmindJul 23, 2025· 1 reaction
    CivitAI

Hi! When I opened the workflow, some of the missing nodes got installed through ComfyUI Manager, but these are still missing:

    "When loading the graph, the following node types were not found

    Text Find and Replace

    Text Concatenate

    String

    FinalFrameSelector

    UnetLoaderGGUFDisTorchMultiGPU

    Switch any [Crystools]"

Any ideas what I'm doing wrong?

    GFrost
    Author
    Jul 23, 2025

Hi!

Strange, everything installed without problems for me. I looked these up among the custom nodes:

* Text Find and Replace
This one is in WAS Node Suite (not Revised) in the Manager

* Text Concatenate

Again in WAS Node Suite

* String

I currently have version 1.2.9 installed. In the Manager: ComfyUI-Easy-Use

* FinalFrameSelector

In the Manager: MediaMixer

* UnetLoaderGGUFDisTorchMultiGPU

In the Manager: ComfyUI-MultiGPU

* Switch any [Crystools]

In the Manager: ComfyUI-Crystools

    slowmindJul 24, 2025

GFrost, unfortunately I managed to install everything except WAS Node Suite; in the Manager it shows "IMPORT FAILED" with this error:

    Error message occurred while importing the 'WAS Node Suite' module.


    D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\was-node-suite-comfyui\WAS_Node_Suite.py:7355: SyntaxWarning: invalid escape sequence '\o'
      if output_path.endswith("ComfyUI/output") or output_path.endswith("ComfyUI\output"):
    Traceback (most recent call last):
      File "D:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 2124, in load_custom_node
        module_spec.loader.exec_module(module)
      File "<frozen importlib._bootstrap_external>", line 999, in exec_module
      File "<frozen importlib._bootstrap>", line 488, in callwith_frames_removed
      File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\was-node-suite-comfyui\__init__.py", line 1, in <module>
        from .WAS_Node_Suite import NODE_CLASS_MAPPINGS
      File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\was-node-suite-comfyui\WAS_Node_Suite.py", line 14678, in <module>
        build_info = ' '.join(cv2.getBuildInformation().split())
                              ^^^^^^^^^^^^^^^^^^^^^^^
    AttributeError: module 'cv2' has no attribute 'getBuildInformation'

UPD. I managed to find an alternative for the Text Concatenate node, but I couldn't find anything similar to Text Find and Replace. I had to just delete those nodes and route around them. But then I ran into a problem with FILM VFI: the error "PytorchStreamReader failed reading zip archive: failed finding central directory."

Maybe my ComfyUI install itself is somehow broken.

In any case, great work. Even with part of it disabled, I like the result much more than in the other workflows I've tried, and I hope I'll still get to try it in full.

    GFrost
    Author
    Jul 24, 2025

slowmind I read that the original WAS is done, basically abandoned. I haven't tried it, but maybe you need to install WAS Revised and point those two nodes to its versions.

    slowmindJul 25, 2025

GFrost yes, I tried Revised too, and for some reason it doesn't install either.

    GFrost
    Author
    Jul 25, 2025

slowmind,
Damn, then I can't figure out what the error is =(
Maybe you need to update ComfyUI? I use the portable version and periodically hit update there.
The version I use is listed in the description, where Sage Attention is mentioned.

    slowmindJul 25, 2025· 1 reaction

GFrost I tried installing Revised again and the nodes started working! Now I just need to figure out how to fix Frame Interpolation. Thank you so much for the help.

UPD. Got it, everything works perfectly now.

    Workflows
    Wan Video 14B i2v 480p

    Details

    Downloads
    1,354
    Platform
    CivitAI
    Platform Status
    Available
    Created
    6/18/2025
    Updated
    5/12/2026
    Deleted
    -

    Files

    wan2221I2VFLF2V2WorkflowsMerge_v11WAN21I2V.zip