CivArchive
    WAN VACE Clip Joiner - Smooth AI video transitions for Wan, LTX-2, Hunyuan, and any other video source - Lightweight v1.0.2
    NSFW

    Github | Civitai


    New feature: seamless looping


    ComfyUI Frontend Compatibility Notice

    Affected versions: ComfyUI_frontend 1.40.x – 1.42.9 (known good: <= 1.39.19 or >= 1.42.10)

    Recent ComfyUI frontend updates have introduced significant issues with subgraph functionality that affect this workflow.

    If you are affected, this message appears in your ComfyUI console right after you start a workflow run:

    Failed to validate prompt for output 499: 
    * ColorMatch 587:586: 
     - Required input is missing: image_target 
    * Basic data handling: IfElse 598: 
     - Required input is missing: if_false

    The workflow may appear to run correctly, but only parts of it will actually produce output. It won't finish with a properly joined video.

    If you see this warning and the workflow isn't running as expected, downgrade your ComfyUI frontend to 1.39.19 or upgrade to 1.42.10, and reload a fresh copy of the workflow.


    What it Does

    Point this workflow at a directory of clips and it will automatically stitch them together. It's designed to work well with a few clips or dozens. At each transition, Wan VACE generates new frames guided by context on both sides, replacing the seam with motion that flows naturally between the clips. Noisy or artifacted frames at clip boundaries get replaced in the same pass. How many context frames and generated frames are used is configurable.

    The workflow runs with either Wan 2.1 VACE or Wan 2.2 Fun VACE. Input clips can come from anywhere - Wan, LTX-2, phone footage, stock video, whatever you have.

    If you want the result to loop cleanly, there's a toggle for that.

    Usage

    1. Put your input clips in their own directory, named so they sort in the order you want them joined.

    2. Configure the workflow parameters. The notes in the workflow have full details on each one.

    3. Set the index to 0.

    4. Queue the workflow. You need to queue it once per transition. That's N-1 times for N clips, or N times if looping is enabled.
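    The queue count follows directly from the transition count. As a quick sanity check, it can be sketched like this (a trivial illustration, not part of the workflow itself):

```python
def runs_needed(num_clips: int, make_loop: bool) -> int:
    """Number of times the workflow must be queued.

    Each run generates one transition: N clips have N - 1 seams,
    plus one extra seam (last clip back to the first) when the
    Make Loop toggle is enabled.
    """
    transitions = num_clips - 1
    if make_loop:
        transitions += 1
    return transitions

# 5 clips, no loop  -> 4 runs
# 5 clips with loop -> 5 runs
```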

    Setup

    This is not a ready-to-run workflow. You need to configure it to fit your system.

    What runs well on my system will not necessarily run well on yours. Configure this workflow to use a VACE model of the same type that you use in your standard Wan workflow. Detailed configuration and usage instructions can be found in the workflow. Please read carefully.

    Dependencies

    I've used native nodes and tried to keep the custom node dependencies to a minimum. The following packages are required. All of them are installable through the Manager.

    Note: I have not tested this workflow under the new Nodes 2.0 UI.

    Configuration and Models

    You'll need some combination of these models to run the workflow. As noted above, this workflow will not run properly until you configure it for your system. You probably already have a Wan video generation workflow that runs well on your system; configure this workflow the same way.

    The Sampler subgraph contains KSampler nodes and model loading nodes. Inference is isolated in subgraphs, so it should be easy to modify this workflow for your preferred setup. Replace the provided sampler subgraph with one that implements your setup, then plug it into the workflow. Have your way with these until it feels right to you.

    Just make sure all the subgraph inputs and outputs are correctly getting and setting data, and crucially, that the diffusion model you load is one of Wan2.2 Fun VACE or Wan2.1 VACE. GGUFs work fine, but non-VACE models do not. An example alternate sampler subgraph for VACE 2.1 is included.

    Enable sageattention and torch compile if you know your system supports them.

    Troubleshooting

    • The size of tensor a must match the size of tensor b at non-singleton dimension 1 - Check that both dimensions of your input videos are divisible by 16, and resize or crop them if they're not. Fun fact: 1080 is not divisible by 16!
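    A small helper (hypothetical, not part of the workflow) illustrates the check; rounding each dimension down to a multiple of 16 gives a VACE-friendly size:

```python
def nearest_multiple_of_16(x: int) -> int:
    """Round a pixel dimension down to the nearest multiple of 16."""
    return (x // 16) * 16

def vace_safe_dims(width: int, height: int) -> tuple[int, int]:
    """Return a (width, height) pair with both sides divisible by 16."""
    return nearest_multiple_of_16(width), nearest_multiple_of_16(height)

# 1920x1080: 1920 is fine, but 1080 / 16 = 67.5,
# so crop or resize to 1920x1072 before joining.
```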

    • Brightness/color shift - VACE can sometimes affect the brightness or saturation of the clips it generates. I don't know how to avoid this tendency; I think it's baked into the model, unfortunately. Disabling lightx2v speed loras can help, as can making sure you use the exact same lora(s) and strength in this workflow that you used when generating your clips. Some people have reported success using a color match node before the output of the clips in this workflow. I think specific solutions vary by case, though. The most consistent mitigation I have found is to interpolate the framerate up to 30 or 60 fps after using this workflow. The interpolation decreases how perceptible the color shift is. The shift is still there, but it's spread out over 60 frames instead of 16, so it doesn't look like a sudden change to our eyes any more.

    • Regarding Framerate - The Wan models are trained at 16 fps, so if your input videos are at some higher rate, you may get sub-optimal results. At the very least, you'll need to increase the number of context and replace frames by whatever factor your framerate is greater than 16 fps in order to achieve the same effect with VACE. I suggest forcing your inputs down to 16 fps for processing with this workflow, then re-interpolating back up to your desired framerate.
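    The scaling described above can be sketched as follows (my own illustration, not code from the workflow; round the results to whatever multiples your parameter sliders enforce):

```python
def scaled_frame_counts(context_frames: int, replace_frames: int,
                        input_fps: float, base_fps: float = 16.0) -> tuple[int, int]:
    """Scale context/replace frame counts for inputs faster than Wan's native 16 fps."""
    factor = input_fps / base_fps
    return round(context_frames * factor), round(replace_frames * factor)

# 8 context / 8 replace frames at a 32 fps input
# need to double to 16 / 16 for the same effect.
```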

    • IndexError: list index out of range - Your input video may be too small for the parameters you have specified. The minimum size for a video will be (context_frames + replace_frames) * 2 + 1. Confirm that all of your input videos have at least this minimum number of frames.
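    The minimum can be checked up front; a sketch using the formula above:

```python
def min_input_frames(context_frames: int, replace_frames: int) -> int:
    """Minimum frame count an input clip needs for the given parameters."""
    return (context_frames + replace_frames) * 2 + 1

# context=8, replace=8 -> every input clip needs at least 33 frames
```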

    • If you can't make the workflow work, update ComfyUI and try again. If you're not willing to update ComfyUI, I can't help you. We have to be working from the same starting point.

    • Feel free to open an issue on github. This is the most direct way to engage me. If you want a head start, paste your complete console log from a failed run into your issue.


    Changelog

    • v2.5

      • Seamless Loops - Enable the Make Loop toggle and the workflow will generate a smooth transition between your final input video and the first one, allowing the video to be played on a loop.

      • Much lower RAM usage during final assembly - Enabled by default, VideoHelperSuite's Meta Batch Manager drastically reduces the amount of system RAM consumed while concatenating frames. If you were running out of RAM on the final step because you were joining hundreds or thousands of frames, that shouldn't be a problem any more. Additional details in the workflow notes.

    • v2.4 Minor tweaks. Adjust sage attention, torch compile defaults.

    • v2.3 This release prioritizes workflow reliability and maintainability. Core functionality remains unchanged. These changes reduce surface area for failures and improve debuggability. Stability and deterministic operation take priority over convenience features.

      • Looping workflow discontinued – While still functional, the loop-based approach obscured workflow status and complicated targeted reruns for specific transitions. The batch workflow provides better visibility and control.

      • Reverted to lossless ffv1 intermediate files – The 16-bit PNG experiment provided no practical benefit and made addressing individual joins more cumbersome. Returning to the proven method.

      • New custom nodes for cleaner workflows – WAN VACE Prep Batch and VACE Batch Context encapsulate operations that are awkward to express in visual nodes but straightforward in Python. Load Videos From Folder (simple) replaces the KJNodes equivalent to eliminate problematic VideoHelperSuite dependencies that fail in some environments.

      • Enhanced console logging – Additional diagnostic output when Debug=True to aid troubleshooting.

      • Fewer custom node dependencies

    • The Lightweight Workflow has moved to its own page. Check it out if you just need to quickly join two clips without the overhead required by the full workflow.

    • v2.2 Complexity Reduction Release

      • Removed fancy model loader which was causing headaches for safetensors users without any gguf models installed, and vice-versa.

      • Removed the MOE KSampler and TripleKSampler subgraphs. You can still use these samplers, but it's up to you to bring them and set them up.

      • Custom node dependencies reduced.

      • Un-subgraphed some functions. Sadly, this powerful and useful feature is still too unstable to distribute to users on varying versions of ComfyUI.

      • Updated documentation.

    • v2.1

      • Add Prune Outputs to Video Combine nodes, preventing extra frames from being added to the output

    • v2.0 - Workflow redesign. Core functionality is the same, but hopefully usability is improved

      • (Experimental) New looping workflow variant that doesn't require manual queueing and index manipulation. I am not entirely comfortable with this version and consider it experimental. The ComfyUI-Easy-Use For Loop implementation is janky and requires some extra, otherwise useless code to make it work. But it lets you run with one click! Use with caution. All VACE join features are identical between the workflows. Looping is the only difference.

      • (Experimental) Added cross fade at VACE boundaries to mitigate brightness/color shift

      • (Experimental) Added color match for VACE frames to mitigate brightness/color shift

      • Save intermediate work as 16 bit png instead of ffv1 to mitigate brightness/color shift

      • Integrated video join into the main workflow. It will run automatically after the last iteration. No more need to run the join part separately.

      • More documentation

      • Inputs and outputs are logged to the console for better progress tracking

    • v1.2 - Minor Update 2025-Oct-13

      • Sort the input directory list.

    • v1.1 - Minor Update 2025-Oct-11

      • Preserve input framerate in workflow VACE outputs. Previously, all output was forced to 16fps. Note, you must manually set the framerate in the Join & Save output.

      • Changed default model/sampler to Wan 2.2 Fun VACE fp8/KSampler. GGUF, MoE, 2.1 are still available in the bypassed subgraphs.

    Description

    • v1.0.2 Core custom node change.

      • The VACE node I began with wasn't quite flexible enough, so I moved to my own custom node instead. Apologies for the moving target and shifting custom node requirements. Now that the workflow uses my own custom node, things will be stabler.

    The lightweight workflow will continue to share a CivitAI page with the full workflow, but it will be distributed separately from now on.

    Comments (33)

    jeanalaincorre375 · Jan 3, 2026 · 2 reactions

    Thanks for the workflow; it blends different video segments together nicely if you follow the instructions included in the workflow.

    jeanalaincorre375 · Jan 14, 2026

    Is there a way to increase the number of frames to replace?

    __Bob__
    Author
    Jan 15, 2026

    @jeanalaincorre375 Please look at the notes in the workflow titled How To Use This Workflow and the Parameters section replace_frames.

    slrwnd · Jan 11, 2026

    Loop WF

    [For Loop End node]:
    IndexError: list index out of range

    Hey, can you help me? Tnx for your work!

    __Bob__
    Author
    Jan 13, 2026

    That message probably indicates that the list of your input files is empty. Please check that the input path you have specified is correct. It should be a full path to the directory where your files reside. e.g. C:\pictures\join\

    rumbleskin · Jan 13, 2026

    I got this error, any ideas: StringConcatenate sequence item 1: expected str instance, NoneType found

    __Bob__
    Author
    Jan 13, 2026

    This means when the workflow is trying to join two strings, one of the strings is empty. Possibly your input path is incorrect, so the file list is empty. Double check that.

    If you continue to have trouble, please tell me where in the workflow this error is occurring.

    MiddlingMaker · Jan 18, 2026
    ComfyUI Error Report

    Error Details
    • Node ID: N/A
    • Node Type: N/A
    • Exception Type: Prompt execution failed
    • Exception Message: Prompt outputs failed validation: VHS_LoadImagesPath: Exception when validating inner node: 'NoneType' object has no attribute 'strip'

    Same error... I'm sure my input path is correct. The workflow finds the clips and makes the images in a flat file but errors on the recombine. I cannot for the life of me figure out where the string error is coming from.

    sid9000999 · Jan 26, 2026

    getting this same error. Input path is correct.

    __Bob__
    Author
    Jan 26, 2026

    To anyone experiencing this error in the final step of the workflow, please try this:

    Create a new workflow with just LoadImagesFromFolder and VideoCombine nodes. Put the full path to your vace-work folder in LoadImagesFromFolder, and give VideoCombine appropriate parameters. See whether that runs successfully.

    dft78750707 · Jan 14, 2026

    I don't know anything about coding, so I don't know if there is a load mechanism for the frames like loading a video. A few months ago, if you loaded a video with the "Load Video (Upload) 🎥🅥🅗🅢" node you got a colour drift. If you used the "Load Video FFmpeg (Upload) 🎥🅥🅗🅢" node there was no colour drift. Maybe use ffmpeg to load frames, if something like that is even needed?

    MiddlingMaker · Jan 18, 2026 · 1 reaction

    I get an error

    ComfyUI Error Report

    Error Details
    • Node ID: N/A
    • Node Type: N/A
    • Exception Type: Prompt execution failed
    • Exception Message: Prompt outputs failed validation: VHS_LoadImagesPath: Exception when validating inner node: 'NoneType' object has no attribute 'strip'

    It also says > KJ Get/Set

    No SetNode found for name(GetNode). Most likely you're missing custom nodes

    but I Do have KJ nodes installed... tried uninstalling and reinstalling for sanity and same error.

    The workflow outputs the png frames to a folder properly, and if I hardcode the directory with those frames into the video combiner it works, but I can't get the dynamic paths to work.

    __Bob__
    Author
    Jan 19, 2026

    That exception message comes from the VideoHelperSuite node Load Images (Path), which is used in the last step of the workflow to join the work files back together into the final video.

    There is apparently something wrong with your VideoHelperSuite installation. Have you tried updating VHS and/or ComfyUI?

    The Get/Set warning is probably harmless. You may be able to get rid of it by deleting any bypassed Sampler subgraphs that you aren't using.

    MiddlingMaker · Jan 19, 2026 · 1 reaction

    Yeah, everything is up to date. I also tried rolling back a couple of versions each to see if any new updates had broken anything. The get/set warning was a false warning from my system; it turned out to be unrelated.

    Load Images won't ever take a string into the directory node for some reason; it has to be either a simple string or just hardcoded text in the field.

    It's truly a minor inconvenience. The workflow works brilliantly, I just run the three save nodes on their own workflow like the fallback suggests.

    MiddlingMaker · Jan 24, 2026 · 1 reaction

    For changing sampler steps, the default has high sampler on step 1, low sampler step 2-6. Is it recommended to keep low sampler 5x the steps of high? If I want more steps would I do say 2 high 10 low? Or is it more recommended to split them 50/50?

    __Bob__
    Author
    Jan 25, 2026

    Those are settings I have found to work well for me. I usually run a MoE KSampler or TripleKSampler, which automatically decide when to switch from high to low. With the settings used in this workflow (lightx2v, cfg=1, shift=5, 6 steps, t2v threshold), these samplers usually give the high model 1 or 2 steps, so that's what I use with a two sampler setup.

    I usually work with realistic video clips generated from photographs, and I get good results this way. I can't attest to whether other types of video (anime, for example) also work well, or whether some tweaking might be required.

    garbit · Jan 28, 2026 · 2 reactions

    If I have more than 2 clips it does not give a video output; it seems to stop processing after the 2nd clip's frame processing is done and will not even get to the KSampler generation.
    I've tried everything, including restarting, new dirs, resetting to 0; nothing seems to help.
    Any ideas how to get an output with more than 2 videos? Thanks

    __Bob__
    Author
    Jan 28, 2026· 1 reaction

    If you are running the batch version of the workflow and you have 5 videos to join, you must queue the video to run 4 times. On the final run it will join the work files into the final video.

    The loop version of the workflow is meant to do the same thing with one queued run. This version is less reliable, however.

    garbit · Jan 28, 2026

    @__Bob__ so you run it 4 times and it will auto join it on the last run? is that right?... I will try it

    ok, it works, but I've got to be honest, I don't know why this isn't the first thing that's mentioned big bold letters everywhere, that's like the key component of the batch workflow and it's not even mentioned on the workflow itself... baffling... I'm probably blind though, that's the other possibility 😊

    having said that, it works great so far, huge props for creating this. really excellent work!!

    __Bob__
    Author
    Jan 28, 2026

    @garbit The giant green note in the workflow that is titled HOW TO USE THIS WORKFLOW covers this.

    I'm glad you were successful.

    kAIbold · Mar 29, 2026

    @__Bob__ I'm getting this same issue of the work videos outputting, but nothing more. However, I am indeed queuing up the right number of runs for the videos. I am getting this minor error:

    * Basic data handling: FlowSelect 577:

    - Exception when validating inner node: tuple index out of range

    Output will be ignored

    I think this here would be the cause. How would I fix this?

    __Bob__
    Author
    Mar 29, 2026· 1 reaction

    @kAIbold Can you check what version of the Wan VACE Prep custom node package you have installed? You need v1.0.12 for this workflow update. I've noticed that for some reason, ComfyUI Manager still has v1.0.11 marked "Latest" even though v1.0.12 is also present. You may need to manually update to v1.0.12.
    If you do need to update, you should completely close this workflow and open a new copy of it, since ComfyUI may have saved a broken copy with disconnected nodes.

    If this turns out not to be the problem, I'll still be here. :)

    kAIbold · Mar 30, 2026

    @__Bob__ Thank you kindly for your assistance! I reinstalled the nodes manually, but strangely they show as v1.0.11 in ComfyUI, but when I open the file itself it shows 1.0.12. I used a fresh workflow to test it. The initial error (577) has disappeared, but the behaviour remains the same. It outputs four large videos of my videos and nothing more. The errors currently are this:

    Failed to validate prompt for output 499:

    * ColorMatch 587:586:

    - Required input is missing: image_target

    * ComfySwitchNode 780:

    - Required input is missing: on_false

    Output will be ignored

    Failed to validate prompt for output 366:

    Output will be ignored

    I think the colormatch error was happening before, I simply ignored it as I wasn't using it. The error on 366 and 780 seem to be new.

    __Bob__
    Author
    Mar 30, 2026· 1 reaction

    @kAIbold This error is caused by the ongoing ComfyUI frontend shenanigans. They released a major update to the frontend without adequate testing and it broke subgraphs for everybody. One major bug in the update causes subgraph connections to become disconnected. That's what your new error message is reporting.
    Apparently some things in the frontend have since been fixed, but not all. If it's at all possible, I recommend downgrading your ComfyUI installation until the chaos passes. Frontend 1.39.19 is the last known good version.

    Another user reports the same issue and talks about their solution in this thread: https://civitai.com/models/2024299?dialog=commentThread&commentId=1146441

    Sorry for the trouble, but we are at the mercy of the ComfyUI devs for now.

    kAIbold · Mar 30, 2026

    @__Bob__ Hey no problem. Thanks for your help! It's quite appreciated.

    negg22 · Jan 31, 2026 · 1 reaction

    This is a really sweet workflow that works perfectly. I have a question, if I only want to have 3 frames in between clips, what should the settings be? Can I set replace frames to 0 and newframes to 2? (2+1 =3)

    __Bob__
    Author
    Jan 31, 2026

    I'm glad you find it useful!

    The Wan model always wants to generate 4n+1 frames. This is why the inputs on the VACE Prep node force multiples of 4 for the parameters. However, you could override add_frames by wiring an INT primitive with a value of 2 to the widget input. If you did this, I think Wan VACE would probably round down to the nearest 4n+1 and give you 1 generated frame instead of the 3 you requested.

    Edit: I just realized we're talking about the full "heavy" workflow. The override I described pertains to the Lightweight workflow. The underlying 4n+1 constraint is still there, but the mechanics of passing "illegal" values are different. In this case, you can double-click the number on the sliders to enter an arbitrary value if you don't want to be bound by the rules the slider is enforcing.

    Either way, I think Wan will foil you. You can have 1 frame or 5 frames in between. I don't think there's a way around this without editing the image batch or something.
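    (A sketch of the 4n+1 constraint discussed above; that Wan rounds an "illegal" count down rather than up is my assumption based on this thread:)

```python
def wan_valid_frames(requested: int) -> int:
    """Round a requested frame count down to Wan's nearest valid 4n+1 length.

    Assumption: Wan rounds *down* when given an 'illegal' count,
    as described in the reply above.
    """
    if requested < 1:
        return 1
    return ((requested - 1) // 4) * 4 + 1

# requested 3 -> 1 generated frame; 5 -> 5; 6, 7, or 8 -> 5; 9 -> 9
```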

    negg22 · Jan 31, 2026 · 1 reaction

    @__Bob__ I see! If Wan VACE forces 1 or 5 frames and my aim is to hit 80 frames, then perhaps the best way is to load only 75 frames of my 77-frame clips, so that the result is 80 frames. Thanks for your advice!

    grasshopper85116 · Feb 2, 2026 · 1 reaction

    Worked pretty well for me first shot. Using vace gguf q8 models on rtx 5090.

    I am just wondering: is it ideal to join 2 clips together that have totally different camera angles? Say, create new frames in the middle that transition smoothly to the different camera angle of the next clip? If so, what parameter should I adjust in the configurator in the workflow?

    Or is something like that not ideal for the purpose of this workflow?

    __Bob__
    Author
    Feb 2, 2026

    VACE is best at interpolating movement from motion that is already present in the contexts you give it. Going from static shot A to static shot B may be difficult. A first-last-frame workflow might accomplish this kind of transition better, but it's easy enough to try with VACE to see whether it will work.

    You may need to prompt for the camera movement you want. Think about how much time is needed for the transition to occur and make sure you're giving the model enough frames to accomplish it with either replace frames, add frames, or a combination of the two.

    grasshopper85116 · Feb 3, 2026

    @__Bob__ 

    OK, thank you. Yeah, I quickly tried this, placing 16 new frames in the middle. It seemed to work OK, but I think 16 frames is too few; the transition to the next scene was very quick. I might try with 32 frames and see how that looks.

    Secondly, do you know of a good VACE first-last-frame workflow that can do at least 2 or 3 five-second clips without color shift and degradation, like SVI 2.0 Pro can?

    SVI 2.0 pro has great quality in long video, but I get very bad prompt adherence with it, so I am looking at alternative methods for long video generation.

    __Bob__
    Author
    Feb 3, 2026

    @grasshopper85116 For straight first-last frame, the stock FFLF2V template included with ComfyUI is usually better than a VACE first-last frame workflow. In my experience, you get better motion from the base Wan models. If you stay within 81 frames, the base models don't tend to produce the color artifacts that VACE can sometimes produce.

    Regarding long video generation, IMO every Wan solution is an imperfect hack. Wan was trained for 81 frames, so any method to go beyond that is fighting the model. There will always be tradeoffs. If you really need to generate longer videos in one go, try a different model built for that, like LTX-2 maybe.

    grasshopper85116 · Feb 4, 2026 · 1 reaction

    @__Bob__ No probs, thanks. By the way, I tried Clip Joiner and added 3 seconds (48 frames) of new frames in the middle, and it worked surprisingly well; I didn't see any color shift either. This is great!

    Workflows
    Wan Video 14B t2v

    Details

    • Downloads: 0
    • Platform: CivitAI
    • Platform Status: Deleted
    • Created: 1/4/2026
    • Updated: 4/27/2026
    • Deleted: 1/6/2026

    Files

    • wanVACEClipJoinerNative_lightweightV102.zip