CivArchive
    StartEndFrames simple workflow WAN2.1 | GGUF | LoRA | UPSCALE | TeaCache - v1.1

    📁 Files:

    Recommendation:
    >24 GB VRAM: base or Q8_0
    16 GB VRAM: Q5_K_S
    <12 GB VRAM: Q4_K_S

    For base version
    VACE Model: wan2.1_vace_14B_fp8_e4m3fn.safetensors or wan2.1_vace_1.3B_fp16.safetensors
    In models/diffusion_models

    CLIP: umt5_xxl_fp8_e4m3fn_scaled.safetensors
    in models/clip

    For GGUF version
    VACE Quant Model: Wan2.1-VACE-14B-QX_0.gguf
    In models/diffusion_models

    Quant CLIP: umt5-xxl-encoder-QX.gguf
    in models/clip

    VAE: wan_2.1_vae.safetensors
    in models/vae

    ANY upscale model (deprecated):

    in models/upscale_models

    📦 Custom Nodes:

    Description

    New text encoder loader, fix on seed node.

    FAQ

    Comments (28)

    vslinx
    Mar 23, 2025 (1 reaction)
    CivitAI

    The Image Saver node is missing from the description, and sadly, for some reason I can't use it. The import always fails even though ComfyUI is on the newest version and I'm using the newest version of the node :/

    It's easy enough to swap out the seed node for the default generic one, so it was easy to fix :P
    But maybe others will face the same issue.

    And sadly, when trying to generate with it, I'm getting this error:

    WanVideoSEImageClipEncode
    shape '[1, 19, 4, 96, 64]' is invalid for input of size 485376
    <comfy.clip_vision.ClipVisionModel object at 0x000001D5A9BBBCD0>
    !!! Exception during processing !!! shape '[1, 19, 4, 96, 64]' is invalid for input of size 485376
    Traceback (most recent call last):
      File "G:\pinokio\api\comfy.git\app\execution.py", line 327, in execute
        output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
      File "G:\pinokio\api\comfy.git\app\execution.py", line 202, in get_output_data
        return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
      File "G:\pinokio\api\comfy.git\app\execution.py", line 174, in _map_node_over_list
        process_inputs(input_dict, i)
      File "G:\pinokio\api\comfy.git\app\execution.py", line 163, in process_inputs
        results.append(getattr(obj, func)(**inputs))
      File "G:\pinokio\api\comfy.git\app\custom_nodes\ComfyUI-WanVideoStartEndFrames\nodes.py", line 586, in process
        mask = mask.view(1, mask.shape[1] // 4, 4, lat_h, lat_w)
    RuntimeError: shape '[1, 19, 4, 96, 64]' is invalid for input of size 485376

    The image I'm trying to generate is at 512x768 resolution, while the input images are 4096x6144.

    This does not cause any issues in the usual workflows. The issue happens inside the "Clip Encode" node :/
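For what it's worth, the numbers in the error are plain element-count arithmetic. Assuming the usual 8x spatial compression of the VAE latent (768/8 = 96, 512/8 = 64) and mask frames grouped in blocks of 4, the view asks for 1·19·4·96·64 = 466,944 elements, but the mask actually holds 485,376, i.e. 19.75 frame groups rather than 19, so the frame count and the mask length disagree. A minimal sketch of the check:

```python
# torch.Tensor.view() requires the element counts to match exactly.
lat_h, lat_w = 768 // 8, 512 // 8          # 96, 64 (8x VAE downscale, assumed)
requested = 1 * 19 * 4 * lat_h * lat_w     # elements the view() shape implies
actual = 485376                            # elements the mask really holds

print(requested)                           # 466944
print(actual / (4 * lat_h * lat_w))        # 19.75 frame groups, not 19
```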

    UmeAiRT
    Author
    Mar 23, 2025 (2 reactions)

    Thanks for the test, I modified the node for the seed and changed the text encoder loader.

    vslinx
    Mar 23, 2025

    @UmeAiRT It works great now! I've just posted my first results from 3 different start-and-end-frame videos.
    Sometimes it takes 2-3 videos to get a good result, and sometimes the videos don't end on the correct end frame, but I'm sure that's entirely down to the technology and not the workflow.

    The only weird thing is that the previews for the videos do not work. I can wait until the whole process is done and the videos will be in my output folder, but they do not show as previews in the nodes like they do in the other workflows; maybe one of the outputs is not connected or something.

    Have you managed to get Triton/SageAttention to work with this workflow?
    If I set the attention_mode to sageattn and un-bypass the Triton node in the "Model preperation" group, I get this error in the WanVideoSEModelLoader node:

    must be called with a dataclass type or instance

    2025-03-23T22:35:12.078157 - !!! Exception during processing !!! must be called with a dataclass type or instance
    2025-03-23T22:35:12.086157 - Traceback (most recent call last):
      File "G:\pinokio\api\comfy.git\app\execution.py", line 327, in execute
        output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
      File "G:\pinokio\api\comfy.git\app\execution.py", line 202, in get_output_data
        return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
      File "G:\pinokio\api\comfy.git\app\execution.py", line 174, in _map_node_over_list
        process_inputs(input_dict, i)
      File "G:\pinokio\api\comfy.git\app\execution.py", line 163, in process_inputs
        results.append(getattr(obj, func)(**inputs))
      File "G:\pinokio\api\comfy.git\app\custom_nodes\ComfyUI-WanVideoStartEndFrames\nodes.py", line 362, in loadmodel
        patcher.model.diffusion_model.blocks[i] = torch.compile(block, fullgraph=compile_args["fullgraph"], dynamic=compile_args["dynamic"], backend=compile_args["backend"], mode=compile_args["mode"])
      File "G:\pinokio\api\comfy.git\app\env\lib\site-packages\torch\__init__.py", line 2447, in compile
        return torch._dynamo.optimize(
      File "G:\pinokio\api\comfy.git\app\env\lib\site-packages\torch\_dynamo\eval_frame.py", line 716, in optimize
        return _optimize(rebuild_ctx, *args, **kwargs)
      File "G:\pinokio\api\comfy.git\app\env\lib\site-packages\torch\_dynamo\eval_frame.py", line 790, in _optimize
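The wording "must be called with a dataclass type or instance" matches the TypeError that Python's dataclasses helpers (such as fields()) raise when handed something that isn't a dataclass — presumably a plain dict of compile arguments reaching a code path that expects a dataclass. A minimal reproduction of that error class (hypothetical; this is not the node's actual code, and CompileArgs is an illustrative name):

```python
from dataclasses import dataclass, fields


@dataclass
class CompileArgs:
    """Illustrative container mirroring the torch.compile kwargs in the log."""
    fullgraph: bool = False
    dynamic: bool = False
    backend: str = "inductor"
    mode: str = "default"


# Works: fields() accepts a dataclass type or instance
print([f.name for f in fields(CompileArgs())])

# Fails: the same call on a plain dict raises the TypeError seen above
try:
    fields({"backend": "inductor"})
except TypeError as e:
    print(type(e).__name__, e)
```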
    UmeAiRT
    Author
    Mar 24, 2025 (1 reaction)

    @vslinx For the preview, I had disabled it to save VRAM and forgot to put it back. You can right-click on each recording node and choose "show preview".

    vslinx
    Mar 24, 2025

    @UmeAiRT Aaaaaaah, that makes sense! Thank you!

    Miss_Noah
    Mar 23, 2025 (2 reactions)
    CivitAI

    Getting the following error while trying to use this workflow:
    WanVideoSEImageClipEncode

    shape '[1, 7, 4, 72, 72]' is invalid for input of size 160704

    UmeAiRT
    Author
    Mar 23, 2025 (1 reaction)

    Thanks for the test, I changed the text encoder loader in 1.1.

    ZweiReh118
    Mar 23, 2025

    @UmeAiRT I get a similar error in 1.1:

    WanVideoSEImageClipEncode

    shape '[1, 13, 4, 80, 80]' is invalid for input of size 352000

    Looks like an interesting workflow, though. I guess you could do looping animations with it?

    Miss_Noah
    Mar 23, 2025

    @UmeAiRT Still getting the same error, unfortunately.

    dlfoid23
    May 7, 2025

    @UmeAiRT shape '[1, 25, 4, 75, 75]' is invalid for input of size 579375
    I get the same error in 2.1.

    dlfoid23
    May 17, 2025

    @UmeAiRT Any update on this error?

    xxxtembel
    Mar 24, 2025
    CivitAI

    Just one small bug, it seems: results are not shown in the workflow. Otherwise it works like a charm. Thank you! GPU 16 GB + RAM 32 GB. Model I2V-14B-480P_fp8_e4m3fn.safetensors with fp8_e4m3fn quantization + sageattn. Model set to offload, text encoder and CLIP on main_device; this configuration works on my PC. I'm using Win 10 + ComfyUI portable.

    xxxtembel
    Mar 24, 2025

    Also, it might be better to swap the order of upscaling and interpolation: upscale first and only interpolate the videos afterwards; the result is better.

    UmeAiRT
    Author
    Mar 24, 2025

    For the preview I had disabled it to save VRAM and I forgot to put it back. You can right click on each recording node and choose "show preview"

    UmeAiRT
    Author
    Mar 24, 2025

    @xxxtembel I inverted the order at the request of users, because interpolation after upscaling requires a lot of VRAM. But I also published the interpolation and upscaling in separate workflows.

    xxxtembel
    Mar 24, 2025

    @UmeAiRT Thanks, fixed.

    xxxtembel
    Mar 24, 2025

    @UmeAiRT I understand. I'll swap it for myself; I don't use upscale at more than 0.4.

    NoneofYourBusiness2
    Mar 24, 2025 (3 reactions)
    CivitAI

    Works great, thanks!

    UmeAiRT
    Author
    Mar 24, 2025

    Thank you! Are you using the exact models suggested or have you made any changes?

    @UmeAiRT I downloaded the suggested ones. Got errors with other models.

    UmeAiRT
    Author
    Mar 25, 2025

    @NoneofYourBusiness2 Ok, thanks.

    tosola
    Mar 25, 2025
    CivitAI

    I don't think the WanVideoWrapper node will support gguf in a short time.

    ree89
    Mar 26, 2025
    CivitAI

    My configuration is a 4070 Ti Super with 16 GB, and I can't run fp16, so I'm using fp8. The generated video is blurry and indistinct in the middle, but the first and last frames are clear. Also, I can't control the exact frame rate in the workflow, only through the yellow nodes.

    MaestroS
    Apr 2, 2025
    CivitAI

    Is this compatible with GGUF?

    UmeAiRT
    Author
    Apr 2, 2025

    Not at the moment, I have to update it for that

    p1042779030337
    Apr 2, 2025
    CivitAI

    If it's not compatible with GGUF, why does it say GGUF in the title?

    Don't get me wrong, you're doing a great job, just fix these few things.

    UmeAiRT
    Author
    Apr 2, 2025

    The first version was GGUF, but the results were very bad, and I forgot to remove GGUF from the title. Sorry for that.

    xuanwoa
    Apr 5, 2025
    CivitAI

    Please update the GGUF version, thank you.

    Workflows
    Wan Video

    Details

    Downloads
    879
    Platform
    CivitAI
    Platform Status
    Available
    Created
    3/23/2025
    Updated
    5/16/2026
    Deleted
    -

    Files

    startendframesSimpleWorkflowWAN2_v11.zip