New feature: seamless looping
ComfyUI Frontend Compatibility Notice
Affected versions: ComfyUI_frontend 1.40.x – 1.42.9 (known good: <= 1.39.19 or >= 1.42.10)
Recent ComfyUI frontend updates have introduced significant issues with subgraph functionality that affect this workflow.
If you are affected, this message appears in your ComfyUI console right after you start a workflow run:
Failed to validate prompt for output 499:
* ColorMatch 587:586:
- Required input is missing: image_target
* Basic data handling: IfElse 598:
- Required input is missing: if_false
The workflow may appear to run correctly, but only parts of it will actually produce output. It won't finish with a properly joined video.
If you see this warning and the workflow isn't running as expected, downgrade your ComfyUI frontend to 1.39.19 or upgrade to 1.42.10, and reload a fresh copy of the workflow.
What it Does
Point this workflow at a directory of clips and it will automatically stitch them together. It's designed to work well with a few clips or dozens. At each transition, Wan VACE generates new frames guided by context on both sides, replacing the seam with motion that flows naturally between the clips. Noisy or artifacted frames at clip boundaries get replaced in the same pass. How many context frames and generated frames are used is configurable.
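The console logs posted in the comments (context=8, replace=8, new=0 producing a 33-frame window described as "16 context + 17 generated") suggest how the per-transition window is sized. This is a sketch inferred from those log lines, not from the node source:

```python
def transition_window(context: int, replace: int, new: int) -> tuple[int, int]:
    """Estimate the VACE window for one transition.

    Returns (total_frames, generated_frames). Formula inferred from the
    workflow's console output: frames replaced on both sides of the seam,
    plus new frames and a bridge frame, flanked by context frames.
    """
    generated = replace * 2 + new + 1
    total = context * 2 + generated
    return total, generated

# Matches the logged "33 frames (16 context + 17 generated)":
print(transition_window(8, 8, 0))  # -> (33, 17)
```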
The workflow runs with either Wan 2.1 VACE or Wan 2.2 Fun VACE. Input clips can come from anywhere - Wan, LTX-2, phone footage, stock video, whatever you have.
If you want the result to loop cleanly, there's a toggle for that.
Usage
Put your input clips in their own directory, named so they sort in the order you want them joined.
Configure the workflow parameters. The notes in the workflow have full details on each one.
Set the index to 0.
Queue the workflow. You need to queue it once per transition. That's N-1 times for N clips, or N times if looping is enabled.
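Filenames sort lexicographically, so unpadded numbers come out of order; zero-padding to a consistent width keeps the join order correct. A quick illustration:

```python
# Lexicographic sorting puts "clip10" before "clip2".
names = ["clip1.mp4", "clip2.mp4", "clip10.mp4"]
print(sorted(names))  # -> ['clip1.mp4', 'clip10.mp4', 'clip2.mp4']

# Zero-padding to a fixed width makes lexicographic order match numeric order.
padded = [f"clip{i:03d}.mp4" for i in (1, 2, 10)]
print(sorted(padded))  # -> ['clip001.mp4', 'clip002.mp4', 'clip010.mp4']
```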
Setup
This is not a ready-to-run workflow. You need to configure it to fit your system.
What runs well on my system will not necessarily run well on yours. Configure this workflow to use a VACE model of the same type that you use in your standard Wan workflow. Detailed configuration and usage instructions can be found in the workflow. Please read carefully.
Dependencies
I've used native nodes and tried to keep the custom node dependencies to a minimum. The following packages are required. All of them are installable through the Manager.
ComfyUI-Wan-VACE-Prep v1.0.12 or higher
Note: I have not tested this workflow under the new Nodes 2.0 UI.
Configuration and Models
You'll need some combination of these models to run the workflow. As noted above, this workflow will not run correctly on your system until you configure it. You probably already have a Wan video generation workflow that runs well on your system; configure this workflow the same way as your generation workflow.
Wan 2.2 Fun VACE
Wan 2.1 VACE
Kijai’s extracted Fun Vace 2.2 modules, for loading along with standard T2V models. Native use examples here.
The Sampler subgraph contains KSampler nodes and model loading nodes. Inference is isolated in subgraphs, so it should be easy to modify this workflow for your preferred setup. Replace the provided sampler subgraph with one that implements your setup, then plug it into the workflow. Have your way with these until it feels right to you.
Just make sure all the subgraph inputs and outputs are correctly getting and setting data, and crucially, that the diffusion model you load is one of Wan2.2 Fun VACE or Wan2.1 VACE. GGUFs work fine, but non-VACE models do not. An example alternate sampler subgraph for VACE 2.1 is included.
Enable sageattention and torch compile if you know your system supports them.
Troubleshooting
The size of tensor a must match the size of tensor b at non-singleton dimension 1 - Check that both dimensions of your input videos are divisible by 16, and resize them if they're not. Fun fact: 1080 is not divisible by 16!
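Snapping a dimension to a multiple of 16 is simple arithmetic; how you apply the resize or crop is up to your preprocessing. A small sketch:

```python
def snap_to_16(x: int, up: bool = True) -> int:
    """Snap a dimension to a multiple of 16, rounding up or down."""
    return ((x + 15) // 16) * 16 if up else (x // 16) * 16

# 1080 is not divisible by 16; the nearest valid sizes are 1088 and 1072.
print(snap_to_16(1080))            # -> 1088
print(snap_to_16(1080, up=False))  # -> 1072
```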
Brightness/color shift - VACE can sometimes affect the brightness or saturation of the clips it generates. I don't know how to avoid this tendency; I think it's baked into the model, unfortunately. Disabling lightx2v speed loras can help, as can using the exact same lora(s) and strengths in this workflow that you used when generating your clips. Some people have reported success adding a color match node before this workflow's clip output, though I think specific solutions vary by case. The most consistent mitigation I have found is to interpolate the framerate up to 30 or 60 fps after using this workflow. Interpolation decreases how perceptible the color shift is: the shift is still there, but it's spread over 60 frames instead of 16, so it no longer reads as a sudden change to our eyes.
Regarding Framerate - The Wan models are trained at 16 fps, so if your input videos are at some higher rate, you may get sub-optimal results. At the very least, you'll need to increase the number of context and replace frames by whatever factor your framerate is greater than 16 fps in order to achieve the same effect with VACE. I suggest forcing your inputs down to 16 fps for processing with this workflow, then re-interpolating back up to your desired framerate.
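As a rough illustration of that scaling (`scale_frames` is a hypothetical helper for this note, not part of the workflow):

```python
import math

def scale_frames(frames_at_16fps: int, input_fps: float) -> int:
    """Scale a frame-count parameter from its 16 fps baseline to the input fps,
    so it covers the same span of real time."""
    return math.ceil(frames_at_16fps * input_fps / 16)

# A context of 8 frames at 16 fps covers half a second;
# at 30 fps you need 15 frames to cover the same half second.
print(scale_frames(8, 30))  # -> 15
```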
IndexError: list index out of range - Your input video may be too small for the parameters you have specified. The minimum frame count for a video is
(context_frames + replace_frames) * 2 + 1. Confirm that all of your input videos have at least this many frames.
If you can't make the workflow work, update ComfyUI and try again. If you're not willing to update ComfyUI, I can't help you; we have to be working from the same starting point.
Feel free to open an issue on github. This is the most direct way to engage me. If you want a head start, paste your complete console log from a failed run into your issue.
Changelog
v2.5
Seamless Loops - Enable the Make Loop toggle and the workflow will generate a smooth transition between your final input video and the first one, allowing the video to be played on a loop.
Much lower RAM usage during final assembly - Enabled by default, VideoHelperSuite's Meta Batch Manager drastically reduces the amount of system RAM consumed while concatenating frames. If you were running out of RAM on the final step because you were joining hundreds or thousands of frames, that shouldn't be a problem any more. Additional details in the workflow notes.
v2.4 Minor tweaks. Adjust sage attention, torch compile defaults.
v2.3 This release prioritizes workflow reliability and maintainability. Core functionality remains unchanged. These changes reduce surface area for failures and improve debuggability. Stability and deterministic operation take priority over convenience features.
Looping workflow discontinued – While still functional, the loop-based approach obscured workflow status and complicated targeted reruns for specific transitions. The batch workflow provides better visibility and control.
Reverted to lossless ffv1 intermediate files – The 16-bit PNG experiment provided no practical benefit and made addressing individual joins more cumbersome. Returning to the proven method.
New custom nodes for cleaner workflows – WAN VACE Prep Batch and VACE Batch Context encapsulate operations that are awkward to express in visual nodes but straightforward in Python. Load Videos From Folder (simple) replaces the KJNodes equivalent to eliminate problematic VideoHelperSuite dependencies that fail in some environments.
Enhanced console logging – Additional diagnostic output when Debug=True to aid troubleshooting.
Fewer custom node dependencies
The Lightweight Workflow has moved to its own page. Check it out if you just need to quickly join two clips without the overhead required by the full workflow.
v2.2 Complexity Reduction Release
Removed fancy model loader which was causing headaches for safetensors users without any gguf models installed, and vice-versa.
Removed the MOE KSampler and TripleKSampler subgraphs. You can still use these samplers, but it's up to you to bring them and set them up.
Custom node dependencies reduced.
Un-subgraphed some functions. Sadly, this powerful and useful feature is still too unstable to distribute to users on varying versions of ComfyUI.
Updated documentation.
v2.1
Add Prune Outputs to Video Combine nodes, preventing extra frames from being added to the output
v2.0 - Workflow redesign. Core functionality is the same, but hopefully usability is improved
(Experimental) New looping workflow variant that doesn't require manual queueing and index manipulation. I am not entirely comfortable with this version and consider it experimental. The ComfyUI-Easy-Use For Loop implementation is janky and requires some extra, otherwise useless code to make it work. But it lets you run with one click! Use with caution. All VACE join features are identical between the workflows. Looping is the only difference.
(Experimental) Added cross fade at VACE boundaries to mitigate brightness/color shift
(Experimental) Added color match for VACE frames to mitigate brightness/color shift
Save intermediate work as 16 bit png instead of ffv1 to mitigate brightness/color shift
Integrated video join into the main workflow. It will run automatically after the last iteration. No more need to run the join part separately.
More documentation
Inputs and outputs are logged to the console for better progress tracking
v1.2 - Minor Update 2025-Oct-13
Sort the input directory list.
v1.1 - Minor Update 2025-Oct-11
Preserve input framerate in workflow VACE outputs. Previously, all output was forced to 16fps. Note: you must manually set the framerate in the Join & Save output.
Changed default model/sampler to Wan 2.2 Fun VACE fp8/KSampler. GGUF, MoE, 2.1 are still available in the bypassed subgraphs.
Description
v2.5
If you're upgrading from a previous version, be sure to also upgrade the Wan VACE Prep node package too. This version of the workflow requires node v1.0.12 or higher.
Seamless Loops - Enable the
Make Looptoggle and the workflow will generate a smooth transition between your final input video and the first one, allowing the video to be played on a loop.Much lower RAM usage during final assembly - Enabled by default, VideoHelperSuite's Meta Batch Manager drastically reduces the amount of system RAM consumed while concatenating frames. If you were running out of RAM on the final step because you were joining hundreds or thousands of frames, that shouldn't be a problem any more. Additional details in the workflow notes.
FAQ
Comments (30)
@__Bob__ just wanted to confirm that the workflow is currently broken on the latest stable branch of ComfyUI (v0.18.3). @agentgerbil's comment is the same error everyone gets with it.
It's not caused (directly) by the ComfyUI subgraph issue which has since been fixed; it seems version 2.5 of this workflow was actually uploaded with a disconnected node. Downgrading ComfyUI front-ends won't fix it.
Instead, go into the "Wan 2.2 VACE - 2 KSampler" subgraph and connect the VAE Decode to the subgraph exit node.
I'm sure there are other errors (didn't have time to look), but that will at least generate the connecting clips in the "vace-work" folder.
Also wanted to say thanks for the amazing work! Cheers
I have confirmed that version 2.5 of this workflow does not have a disconnected node. The downloads here and at github are connected properly and will run fine on a version of ComfyUI that is not compromised by the recent bad comfyui_frontend_package update.
- ComfyUI 0.17.0 + ComfyUI_frontend 1.39.19
- ComfyUI 0.18.1 + ComfyUI_frontend 1.39.19
- ComfyUI 0.18.1 + ComfyUI_frontend 1.42.8 (note: frontend 1.42.8 is still profoundly broken in many ways, but it correctly loads and runs this workflow)
@drowai443 before you downgraded, you may have inadvertently saved the workflow in a broken state and then reloaded that after you fixed your ComfyUI installation.
@__Bob__ That's very strange. I have multiple machines with multiple installations, and it occurred on both. I never downgraded this PC and ran your workflow for the first time on v0.18.3.
Then I downloaded it and ran it on another PC using an older version of Comfy frontend (v1.39.19 - which, to my knowledge, was prior to any of the subgraph breaking nonsense), and it also had the same issue.
I can't explain what went wrong since I never saved or reloaded it. Two separate machines, two separate versions, two separate downloads. Not sure how I could've cross-contaminated that, but I agree that it does seem the likeliest culprit if you've confirmed the nodes are attached. Sorry if I made you go on a goose chase.
Fantastic work, and the extension nodes make it very easy to incorporate into custom workflows.
I'm sorry, but I still don't understand how to use it even after trying for three days. When I run it twice with three clips (A+B+C) and the following settings: INDEX=0, INPUT=C:\AI\ComfyUI\output\jungle\vace-work, project=jungle, cont=16, repl=16, newframe=8, it only generates a 1-second clip1 and a 3-second clip3, and the combined 15-second video is not created.
If there are no clip2 files in your work directory, that means inference never runs. What warnings are displayed in your console when you run the workflow? If they resemble the error @agentgerbil reported here, you are probably affected by the bad ComfyUI frontend update, which causes subgraph connections to disconnect, among other things. Currently the best solution to the broken ComfyUI frontend is to downgrade until the problem is solved.
If your console shows different errors, feel free to post them here or open an issue at github.
Thank you, I was able to create a 15-second video. The problem is solved. It was an issue with the Post-Processing: Color Match subgraph. Good job!
This workflow only supports up to 99 videos for concatenation. Last night, I attempted to concatenate 150 files, then went to sleep. When I woke up, I found that all the videos were in disordered sequence—video 10 was concatenated with video 110, and so on. Even when I renamed the files as "010" and "0110", it made no difference. I hope the workflow can support concatenating more video clips.
Hi @zs7758,
The workflow actually supports up to 999 clips, not 99. The VACE Batch Context node zero-pads work filenames to 3 digits (001, 002, ... 999), so you have plenty of headroom for 150 files. This is an artificial upper limit that could be easily increased if it ever does cause a problem.
As you discovered, input files also must be zero-padded so they sort correctly. This is covered in the workflow notes (How To Use This Workflow) under Step 5: Prepare Inputs. Without zero-padding, the OS sorts lexicographically, putting "10" right after "1", "110" right after "11", etc., which explains exactly what you saw. Your instinct to rename the files was correct, although looking at your example, 010 and 0110 use different padding lengths (3 digits vs. 4), which would still sort incorrectly. All filenames need the same number of digits throughout. My best guess is that inconsistent padding is why your second run still produced a bad result.
I'll also mention that it's easy to forget to clear out the vace-work directory between runs. Leftover files from a previous attempt can also lead to output problems.
So, going forward:
- Review Step 5 in the workflow notes for input naming guidance. Make sure your 150 input clips use consistent zero-padding (001.mp4 - 150.mp4, or video001.mp4).
- Clear vace-work before any fresh run.
- If you have previously run the workflow without restarting ComfyUI: clear the run cache ( C->Edit->Unload Models and Execution Cache).
Let me know if you're still seeing issues after this. I have certainly processed over 100 inputs myself with this workflow, so I know it's possible. There are a lot of moving parts and it's easy to miss a step when you're working with it for the first time.
@__Bob__ Thank you. I'm running this workflow after modifying the file name and I hope everything goes well
@__Bob__ It seems that the processing duration of each video in the latest version has increased. In version 2.4, processing a video takes approximately 180 seconds, while in the current version 2.5 it takes around 210 seconds.
@zs7758 Hmm. The recent update involved some simple node changes, not anything related to core processing, so I don't know why this would be. Maybe some inference parameter changed, or you're generating more frames, or at a higher resolution?
Anyway, speed isn't the focus here. The goal is to produce high quality clip transitions, so hopefully that part is still working!
@__Bob__ I’ve always been puzzled by this: generating a five-second video using Wan 2.2 takes only 160 seconds; yet, generating just 16 frames using VACE takes even longer than it does for Wan 2.2 to produce that entire five-second clip. Why is that?
@zs7758 All things being equal, VACE generation should take roughly the same time and resources as any Wan generation. If that’s not your experience, there must be workflow differences: model quantization, inference parameters, number of frames, resolution, etc.
Sorry to bother, it seems to be an excellent workflow, but I can't get what I want. I only have two clips, and I renamed them correctly according to the notes. I use the Wan 2.2 VACE model and run the workflow. It only took 9 seconds to finish the job, so I think something went wrong, and I checked the output. I can only get two videos in the "vace-work" folder, and both of them are mirrored versions of my original clips. And I don't know how to make the "Join Frames into Final Video" group run. I ran it several times, and every time I got the same results.
It sounds like the workflow is not running completely. Check the console for warning messages. There is a widespread known problem with certain buggy versions of ComfyUI. Read the description here for more information.
If your problem doesn't seem to be related to the broken Comfyui_frontend package, let me know here or open an issue on github, and we can troubleshoot it.
Edited to add: If you just have two clips to join, you might want to try the lightweight workflow, whose purpose is to join two clips with a minimum of fuss. It has fewer moving parts and doesn't really require any setup like this workflow does.
@__Bob__ Thank you,I will try the lightweight workflow!
@__Bob__ The lightweight workflow works perfectly for me! Thank you!! Works like a charm!
Really awesome workflow, thank you!
One question please?
By and large, pressing the blue button (VACE Outpaint Prep) does not load the image. If I open another instance of the workflow in a new tab, it automatically loads the image in the VACE Outpaint Prep that didn't show prior. Gratitude for any assistance.
Hi,
We should be having this conversation on the Outpaint workflow page, not the VACE Joiner workflow page. :)
Weird. Aside from the delayed loading, is everything working ok? Do you know how to use the dev console in your browser? It might be interesting to see what shows there as you try to run the node to load a frame. But it's ok if that's something you're not familiar with. I wish I had an explanation for what you describe, but I've never seen that behavior.
@__Bob__ Hey my apologies about the wrong area. Switching on Civitai is like not being hungry until you open the fridge for water and there it is; all assorted fruits and vegetables suddenly cloud one's thoughts. :P Levity aside, it may be my browser not refreshing when I click the blue button, but a quick workaround is to just click on another tab and then come back and there it is.
Really appreciate your quality work and the extra you put in for clarity, aesthetics and to just make it work. You came to the party late, but people are already getting to know Bob.
Cheers!
This is an EXCELLENT workflow. Very well organized. The only problem I initially encountered is that it seemed to be working when it wasn't, because I did not have the distillation loras downloaded. The sampler wouldn't run, but there was no direct fault. But of course there was also never a final stitched video. Bypassing the loras, or downloading them (even if you do not use them), makes it work.
Other than that, great work! I had to write a custom node so that I can stitch any number of subfolders inside a folder. This way I can queue 100 runs overnight and get them done efficiently.
Yeah, ComfyUI tries to do us favors by running parts of the workflow even when it can't run the whole thing. If you're not paying attention to console warnings, it can look like a workflow is running even when it's just kind of stepping on a rake over and over. The recent frontend issues caused similar quiet failures, leading people to not realize the workflow wasn't properly running. This is a ComfyUI thing that I don't have control over in the workflow. Frustrating.
Does your custom node just vary the project directory name, or are you doing something fancier?
Thanks for the kind words! :)
@__Bob__ I am doing something much fancier. The node keeps track of what has already been worked on using json "save" files. So I can cancel at any moment and restart later, and it will continue where it left off, even after a complete system reboot. It also moves the video files so that they are on their own and there is no other random garbage. It also exports the project name and the video prefix, plus a few other vars I use. Ultimately I wanted something I can point at a dir and say "do all of that" without having to think about how many times to run and so on.
The workflow works perfectly except when I turned on seamless loop. I have 2 videos that I want to combine, each 5 seconds long. I set the index to 0 and ran the workflow 2 times, but the result video is only 2 seconds long and it is not looping. Do you know what caused this? @__Bob__
Hi @celestialnovel2686, in your ComfyUI console, what are the 10 or so lines following "got prompt" when you run the workflow?
@__Bob__ These are the lines:
got prompt
got prompt
RAM cleanup complete [15.7% → 13.6%, freed: 1367MB]
VRAM cleanup complete [unload models: True, clear cache: True]
[VACE Batch Context] Loop enabled: run once more at index 1 to generate loop transition.
[VACE Batch Context] === Start ===
[VACE Batch Context] Index: 0 (videos 1-2 of 2)
[VACE Batch Context] Input directory: G:\G SSD AI\ComfyUI_windows_portable\comfyuiqwen\ComfyUI_windows_portable\ComfyUI\output\input\breastsplay3
[VACE Batch Context] Video 1: 01.mp4
[VACE Batch Context] Video 2: 02.mp4
[VACE Batch Context] Work prefix: breastsplay6/vace-work/index000
[VACE Batch Context] === End ===
[VACE Join Batch] === Start ===
[VACE Join Batch] Video 1: 465 frames @ 1920x1088
[VACE Join Batch] Video 2: 465 frames @ 1920x1088
[VACE Join Batch] Flags: MIDDLE
[VACE Join Batch] Parameters: context=8, replace=8, new=0
[VACE Join Batch] Outputs:
[VACE Join Batch] control_video: torch.Size([33, 1088, 1920, 3])
[VACE Join Batch] control_mask: torch.Size([33, 1088, 1920])
[VACE Join Batch] start_images: 433 frames
[VACE Join Batch] end_images: 0 frames
[VACE Join Batch] VACE output: 33 frames (16 context + 17 generated)
[VACE Join Batch] === End ===
Using pytorch attention in VAE
Using pytorch attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
[MultiGPU Core Patching] text_encoder_device_patched returning device: cuda:0 (current_text_encoder_device=cuda:0)
Requested to load WanTEModel
loaded completely; 10835.48 MB loaded, full load: True
CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cuda:0, dtype: torch.float16
Requested to load WanVAE
0 models unloaded.
loaded partially; 0.00 MB usable, 0.00 MB loaded, 242.00 MB offloaded, 22.78 MB buffer reserved, lowvram patches: 0
0 models unloaded.
loaded partially; 0.00 MB usable, 0.00 MB loaded, 242.00 MB offloaded, 22.78 MB buffer reserved, lowvram patches: 0
Found quantization metadata version 1
Detected mixed precision quantization
Using mixed precision operations
model weight dtype torch.float16, manual cast: torch.float16
model_type FLOW
Requested to load WAN21_Vace
loaded partially; 5978.37 MB usable, 5505.80 MB loaded, 11038.05 MB offloaded, 472.56 MB buffer reserved, lowvram patches: 321
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [04:21<00:00, 65.43s/it]
Found quantization metadata version 1
Detected mixed precision quantization
Using mixed precision operations
model weight dtype torch.float16, manual cast: torch.float16
model_type FLOW
Requested to load WAN21_Vace
loaded partially; 5970.37 MB usable, 5497.17 MB loaded, 11046.68 MB offloaded, 472.56 MB buffer reserved, lowvram patches: 321
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [04:21<00:00, 65.36s/it]
Requested to load WanVAE
0 models unloaded.
loaded partially; 0.00 MB usable, 0.00 MB loaded, 242.00 MB offloaded, 22.78 MB buffer reserved, lowvram patches: 0
Prompt executed in 00:11:47
[LoRA-Manager] Calculating hash for checkpoint 'wan2.2_fun_vace_high_noise_14B_fp8_scaled' from G:/G SSD AI/ComfyUI_windows_portable/comfyuiqwen/ComfyUI_windows_portable/ComfyUI/models/diffusion_models/wan2.2_fun_vace_high_noise_14B_fp8_scaled.safetensors
[LoRA-Manager] Calculating hash for checkpoint 'wan2.2_fun_vace_low_noise_14B_fp8_scaled' from G:/G SSD AI/ComfyUI_windows_portable/comfyuiqwen/ComfyUI_windows_portable/ComfyUI/models/diffusion_models/wan2.2_fun_vace_low_noise_14B_fp8_scaled.safetensors
RAM cleanup complete [13.5% → 12.5%, freed: 671MB]
VRAM cleanup complete [unload models: True, clear cache: True]
[VACE Batch Context] === Start ===
[VACE Batch Context] Index: 1 (videos 2-1 of 2) [LOOP TRANSITION]
[VACE Batch Context] Input directory: G:\G SSD AI\ComfyUI_windows_portable\comfyuiqwen\ComfyUI_windows_portable\ComfyUI\output\input\breastsplay3
[VACE Batch Context] Video 1: 02.mp4
[VACE Batch Context] Video 2: 01.mp4
[VACE Batch Context] Work prefix: breastsplay6/vace-work/index001
[VACE Batch Context] === End ===
[VACE Join Batch] === Start ===
[VACE Join Batch] Video 1: 465 frames @ 1920x1088
[VACE Join Batch] Video 2: 465 frames @ 1920x1088
[VACE Join Batch] Flags: MIDDLE
[VACE Join Batch] Parameters: context=8, replace=8, new=0
[VACE Join Batch] Outputs:
[VACE Join Batch] control_video: torch.Size([33, 1088, 1920, 3])
[VACE Join Batch] control_mask: torch.Size([33, 1088, 1920])
[VACE Join Batch] start_images: 433 frames
[VACE Join Batch] end_images: 0 frames
[VACE Join Batch] VACE output: 33 frames (16 context + 17 generated)
[VACE Join Batch] === End ===
Using pytorch attention in VAE
Using pytorch attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
[MultiGPU Core Patching] text_encoder_device_patched returning device: cuda:0 (current_text_encoder_device=cuda:0)
Requested to load WanTEModel
loaded completely; 10835.48 MB loaded, full load: True
CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cuda:0, dtype: torch.float16
Requested to load WanVAE
0 models unloaded.
loaded partially; 0.00 MB usable, 0.00 MB loaded, 242.00 MB offloaded, 22.78 MB buffer reserved, lowvram patches: 0
0 models unloaded.
loaded partially; 0.00 MB usable, 0.00 MB loaded, 242.00 MB offloaded, 22.78 MB buffer reserved, lowvram patches: 0
Found quantization metadata version 1
Detected mixed precision quantization
Using mixed precision operations
model weight dtype torch.float16, manual cast: torch.float16
model_type FLOW
Requested to load WAN21_Vace
loaded partially; 5970.37 MB usable, 5497.17 MB loaded, 11046.68 MB offloaded, 472.56 MB buffer reserved, lowvram patches: 321
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [04:33<00:00, 68.31s/it]
Found quantization metadata version 1
Detected mixed precision quantization
Using mixed precision operations
model weight dtype torch.float16, manual cast: torch.float16
model_type FLOW
Requested to load WAN21_Vace
loaded partially; 5970.37 MB usable, 5497.17 MB loaded, 11046.68 MB offloaded, 472.56 MB buffer reserved, lowvram patches: 321
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [04:21<00:00, 65.38s/it]
Requested to load WanVAE
0 models unloaded.
loaded partially; 0.00 MB usable, 0.00 MB loaded, 242.00 MB offloaded, 22.78 MB buffer reserved, lowvram patches: 0
[Load Videos] Batched: Starting new generator for 4 videos (932 frames) in G:\G SSD AI\ComfyUI_windows_portable\comfyuiqwen\ComfyUI_windows_portable\ComfyUI\output/breastsplay6/vace-work
[Load Videos] Batched: Opening [1/4]: index000_clip1_00001.mkv
[Load Videos] Batched: Yielding 160 frames shape=(160, 1088, 1920, 3)
Prompt executed in 00:13:18
[LoRA-Manager] Calculating hash for checkpoint 'wan2.2_fun_vace_high_noise_14B_fp8_scaled' from G:/G SSD AI/ComfyUI_windows_portable/comfyuiqwen/ComfyUI_windows_portable/ComfyUI/models/diffusion_models/wan2.2_fun_vace_high_noise_14B_fp8_scaled.safetensors
[LoRA-Manager] Calculating hash for checkpoint 'wan2.2_fun_vace_low_noise_14B_fp8_scaled' from G:/G SSD AI/ComfyUI_windows_portable/comfyuiqwen/ComfyUI_windows_portable/ComfyUI/models/diffusion_models/wan2.2_fun_vace_low_noise_14B_fp8_scaled.safetensors
RAM cleanup complete [31.3% → 26.3%, freed: 3267MB]
VRAM cleanup complete [unload models: True, clear cache: True]
Meta-Batch 1/nan
[VACE Batch Context] === Start ===
[VACE Batch Context] Index: 1 (videos 2-1 of 2) [LOOP TRANSITION]
[VACE Batch Context] Input directory: G:\G SSD AI\ComfyUI_windows_portable\comfyuiqwen\ComfyUI_windows_portable\ComfyUI\output\input\breastsplay3
[VACE Batch Context] Video 1: 02.mp4
[VACE Batch Context] Video 2: 01.mp4
[VACE Batch Context] Work prefix: breastsplay6/vace-work/index001
[VACE Batch Context] === End ===
[VACE Join Batch] === Start ===
[VACE Join Batch] Video 1: 465 frames @ 1920x1088
[VACE Join Batch] Video 2: 465 frames @ 1920x1088
[VACE Join Batch] Flags: MIDDLE
[VACE Join Batch] Parameters: context=8, replace=8, new=0
[VACE Join Batch] Outputs:
[VACE Join Batch] control_video: torch.Size([33, 1088, 1920, 3])
[VACE Join Batch] control_mask: torch.Size([33, 1088, 1920])
[VACE Join Batch] start_images: 433 frames
[VACE Join Batch] end_images: 0 frames
[VACE Join Batch] VACE output: 33 frames (16 context + 17 generated)
[VACE Join Batch] === End ===
Using pytorch attention in VAE
Using pytorch attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
[MultiGPU Core Patching] text_encoder_device_patched returning device: cuda:0 (current_text_encoder_device=cuda:0)
Requested to load WanTEModel
loaded completely; 10835.48 MB loaded, full load: True
CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cuda:0, dtype: torch.float16
Requested to load WanVAE
0 models unloaded.
loaded partially; 0.00 MB usable, 0.00 MB loaded, 242.00 MB offloaded, 22.78 MB buffer reserved, lowvram patches: 0
0 models unloaded.
loaded partially; 0.00 MB usable, 0.00 MB loaded, 242.00 MB offloaded, 22.78 MB buffer reserved, lowvram patches: 0
Found quantization metadata version 1
Detected mixed precision quantization
Using mixed precision operations
model weight dtype torch.float16, manual cast: torch.float16
model_type FLOW
Requested to load WAN21_Vace
loaded partially; 5970.37 MB usable, 5497.17 MB loaded, 11046.68 MB offloaded, 472.56 MB buffer reserved, lowvram patches: 321
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [04:22<00:00, 65.65s/it]
Found quantization metadata version 1
Detected mixed precision quantization
Using mixed precision operations
model weight dtype torch.float16, manual cast: torch.float16
model_type FLOW
Requested to load WAN21_Vace
loaded partially; 5970.37 MB usable, 5497.17 MB loaded, 11046.68 MB offloaded, 472.56 MB buffer reserved, lowvram patches: 321
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [04:22<00:00, 65.65s/it]
Requested to load WanVAE
0 models unloaded.
loaded partially; 0.00 MB usable, 0.00 MB loaded, 242.00 MB offloaded, 22.78 MB buffer reserved, lowvram patches: 0
[Load Videos] Batched: Starting new generator for 6 videos (1398 frames) in G:\G SSD AI\ComfyUI_windows_portable\comfyuiqwen\ComfyUI_windows_portable\ComfyUI\output/breastsplay6/vace-work
!!! Exception during processing !!! Batched loader produced no frames
Traceback (most recent call last):
File "G:\G SSD AI\ComfyUI_windows_portable\comfyuiqwen\ComfyUI_windows_portable\ComfyUI\execution.py", line 534, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\G SSD AI\ComfyUI_windows_portable\comfyuiqwen\ComfyUI_windows_portable\ComfyUI\execution.py", line 334, in get_output_data
return_values = await asyncmap_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\G SSD AI\ComfyUI_windows_portable\comfyuiqwen\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-lora-manager\py\metadata_collector\metadata_hook.py", line 171, in async_map_node_over_list_with_metadata
results = await original_map_node_over_list(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\G SSD AI\ComfyUI_windows_portable\comfyuiqwen\ComfyUI_windows_portable\ComfyUI\execution.py", line 308, in asyncmap_node_over_list
await process_inputs(input_dict, i)
File "G:\G SSD AI\ComfyUI_windows_portable\comfyuiqwen\ComfyUI_windows_portable\ComfyUI\execution.py", line 296, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "G:\G SSD AI\ComfyUI_windows_portable\comfyuiqwen\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-wan-vace-prep\load_videos_from_folder.py", line 58, in load_videos
return self._load_batched(video_files, folder_path, debug, meta_batch, unique_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\G SSD AI\ComfyUI_windows_portable\comfyuiqwen\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-wan-vace-prep\load_videos_from_folder.py", line 125, in loadbatched
raise RuntimeError("Batched loader produced no frames")
RuntimeError: Batched loader produced no frames
Prompt executed in 00:12:17
@celestialnovel2686 Sorry for the delay in responding.
Thank you for sharing your logs. They show that the workflow actually ran three times. The final run generated duplicate transitions between your last video and the first. The line "[Load Videos] Batched: Starting new generator for 6 videos (1398 frames)" shows six work files in the last iteration, but we only expect four. ComfyUI's execution cache likely caused the final error that ended the workflow: the Meta Batch Manager node tried to run with state left over from the previous iteration and stopped because it thought there were no more frames to process.
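For reference, here is how I count the expected work files. This is a hypothetical sketch, assuming (as the logs suggest, with 4 files for 2 transitions) that each transition writes two clip files to vace-work; it is an inference from the log, not from the node source:

```python
def expected_work_files(n_clips: int, loop: bool) -> int:
    """Estimate how many work files vace-work should hold after all iterations.

    Assumption (inferred from the log output, not from the node code):
    each transition writes two work clips, one for each side of the seam.
    """
    transitions = n_clips if loop else n_clips - 1
    return 2 * transitions

# Two input clips with looping enabled -> 2 transitions -> 4 work files,
# matching "Starting new generator for 4 videos" in the good iteration.
print(expected_work_files(2, loop=True))
```

Anything above that count (six files here) means a transition was generated twice.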
Despite all this, it appears that the first two iterations of the workflow were successful. I would expect them to have produced a properly joined output file at ComfyUI/output/[projectname]/joined_0000x.mp4. You mentioned a 2-second result video. Did it have this name? Or were you looking at a work file from ComfyUI/output/[projectname]/vace-work/ ? Files in vace-work are only temporary work files, not the final product.
Final note: when re-running the workflow after a success or a failure, always clear the ComfyUI cache before you start a new batch: C->Edit->Unload Models and Execution Cache.
There are no obvious problems in your log except for that extra run. If you haven't already, I suggest you try again.
- Delete ComfyUI/output/[projectname]/vace-work
- Clear ComfyUI Execution Cache
- Set Index=0
- Queue the workflow 2 times
@__Bob__ I did that, and this time I get an error message. The workflow still produces a video though, at [projectname]/joined_0000x.mp4. It is not in the vace-work folder, which means it is the result video and not a temporary one. The result video is 7 seconds long, but it does not loop.
This is the error:
got prompt
got prompt
[VACE Batch Context] Loop enabled: run once more at index 1 to generate loop transition.
[VACE Batch Context] === Start ===
[VACE Batch Context] Index: 0 (videos 1-2 of 2)
[VACE Batch Context] Input directory: G:\G SSD AI\ComfyUI_windows_portable\comfyuiqwen\ComfyUI_windows_portable\ComfyUI\output\input\breastsplay5
[VACE Batch Context] Video 1: 01.mp4
[VACE Batch Context] Video 2: 02.mp4
[VACE Batch Context] Work prefix: breastsplay9/vace-work/index000
[VACE Batch Context] === End ===
FETCH ComfyRegistry Data: 45/139
FETCH ComfyRegistry Data: 50/139
FETCH ComfyRegistry Data: 55/139
FETCH ComfyRegistry Data: 60/139
FETCH ComfyRegistry Data: 65/139
FETCH ComfyRegistry Data: 70/139
FETCH ComfyRegistry Data: 75/139
FETCH ComfyRegistry Data: 80/139
FETCH ComfyRegistry Data: 85/139
FETCH ComfyRegistry Data: 90/139
FETCH ComfyRegistry Data: 95/139
FETCH ComfyRegistry Data: 100/139
[VACE Join Batch] === Start ===
[VACE Join Batch] Video 1: 465 frames @ 1920x1088
[VACE Join Batch] Video 2: 465 frames @ 1920x1088
[VACE Join Batch] Flags: MIDDLE
[VACE Join Batch] Parameters: context=8, replace=8, new=0
[VACE Join Batch] Outputs:
[VACE Join Batch] control_video: torch.Size([33, 1088, 1920, 3])
[VACE Join Batch] control_mask: torch.Size([33, 1088, 1920])
[VACE Join Batch] start_images: 433 frames
[VACE Join Batch] end_images: 0 frames
[VACE Join Batch] VACE output: 33 frames (16 context + 17 generated)
[VACE Join Batch] === End ===
FETCH ComfyRegistry Data: 105/139
FETCH ComfyRegistry Data: 110/139
FETCH ComfyRegistry Data: 115/139
Using pytorch attention in VAE
Using pytorch attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
[MultiGPU Core Patching] text_encoder_device_patched returning device: cuda:0 (current_text_encoder_device=cuda:0)
FETCH ComfyRegistry Data: 120/139
Requested to load WanTEModel
loaded completely; 10835.48 MB loaded, full load: True
CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cuda:0, dtype: torch.float16
FETCH ComfyRegistry Data: 125/139
Requested to load WanVAE
0 models unloaded.
loaded partially; 0.00 MB usable, 0.00 MB loaded, 242.00 MB offloaded, 22.78 MB buffer reserved, lowvram patches: 0
FETCH ComfyRegistry Data: 130/139
0 models unloaded.
loaded partially; 0.00 MB usable, 0.00 MB loaded, 242.00 MB offloaded, 22.78 MB buffer reserved, lowvram patches: 0
Found quantization metadata version 1
Detected mixed precision quantization
Using mixed precision operations
FETCH ComfyRegistry Data: 135/139
model weight dtype torch.float16, manual cast: torch.float16
model_type FLOW
Requested to load WAN21_Vace
FETCH ComfyRegistry Data [DONE]
[ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes
FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json [DONE]
[ComfyUI-Manager] All startup tasks have been completed.
loaded partially; 5978.37 MB usable, 5505.80 MB loaded, 11038.05 MB offloaded, 472.56 MB buffer reserved, lowvram patches: 321
100%|████████████████████████████████████████████████████████████████████████████████████| 4/4 [04:23<00:00, 65.81s/it]
Found quantization metadata version 1
Detected mixed precision quantization
Using mixed precision operations
model weight dtype torch.float16, manual cast: torch.float16
model_type FLOW
Requested to load WAN21_Vace
loaded partially; 5972.37 MB usable, 5497.17 MB loaded, 11046.68 MB offloaded, 472.56 MB buffer reserved, lowvram patches: 321
100%|████████████████████████████████████████████████████████████████████████████████████| 4/4 [04:22<00:00, 65.72s/it]
Requested to load WanVAE
0 models unloaded.
loaded partially; 0.00 MB usable, 0.00 MB loaded, 242.00 MB offloaded, 22.78 MB buffer reserved, lowvram patches: 0
Prompt executed in 00:12:28
[LoRA-Manager] Calculating hash for checkpoint 'wan2.2_fun_vace_high_noise_14B_fp8_scaled' from G:/G SSD AI/ComfyUI_windows_portable/comfyuiqwen/ComfyUI_windows_portable/ComfyUI/models/diffusion_models/wan2.2_fun_vace_high_noise_14B_fp8_scaled.safetensors
[LoRA-Manager] Calculating hash for checkpoint 'wan2.2_fun_vace_low_noise_14B_fp8_scaled' from G:/G SSD AI/ComfyUI_windows_portable/comfyuiqwen/ComfyUI_windows_portable/ComfyUI/models/diffusion_models/wan2.2_fun_vace_low_noise_14B_fp8_scaled.safetensors
[VACE Batch Context] === Start ===
[VACE Batch Context] Index: 1 (videos 2-1 of 2) [LOOP TRANSITION]
[VACE Batch Context] Input directory: G:\G SSD AI\ComfyUI_windows_portable\comfyuiqwen\ComfyUI_windows_portable\ComfyUI\output\input\breastsplay5
[VACE Batch Context] Video 1: 02.mp4
[VACE Batch Context] Video 2: 01.mp4
[VACE Batch Context] Work prefix: breastsplay9/vace-work/index001
[VACE Batch Context] === End ===
[VACE Join Batch] === Start ===
[VACE Join Batch] Video 1: 465 frames @ 1920x1088
[VACE Join Batch] Video 2: 465 frames @ 1920x1088
[VACE Join Batch] Flags: MIDDLE
[VACE Join Batch] Parameters: context=8, replace=8, new=0
[VACE Join Batch] Outputs:
[VACE Join Batch] control_video: torch.Size([33, 1088, 1920, 3])
[VACE Join Batch] control_mask: torch.Size([33, 1088, 1920])
[VACE Join Batch] start_images: 433 frames
[VACE Join Batch] end_images: 0 frames
[VACE Join Batch] VACE output: 33 frames (16 context + 17 generated)
[VACE Join Batch] === End ===
0 models unloaded.
loaded partially; 0.00 MB usable, 0.00 MB loaded, 242.00 MB offloaded, 22.78 MB buffer reserved, lowvram patches: 0
0 models unloaded.
loaded partially; 0.00 MB usable, 0.00 MB loaded, 242.00 MB offloaded, 22.78 MB buffer reserved, lowvram patches: 0
Requested to load WAN21_Vace
loaded partially; 5970.37 MB usable, 5497.17 MB loaded, 11046.68 MB offloaded, 472.56 MB buffer reserved, lowvram patches: 321
100%|████████████████████████████████████████████████████████████████████████████████████| 4/4 [04:22<00:00, 65.75s/it]
Requested to load WAN21_Vace
loaded partially; 5970.37 MB usable, 5497.17 MB loaded, 11046.68 MB offloaded, 472.56 MB buffer reserved, lowvram patches: 321
100%|████████████████████████████████████████████████████████████████████████████████████| 4/4 [04:34<00:00, 68.66s/it]
Requested to load WanVAE
0 models unloaded.
loaded partially; 0.00 MB usable, 0.00 MB loaded, 242.00 MB offloaded, 22.78 MB buffer reserved, lowvram patches: 0
[Load Videos] Batched: Starting new generator for 4 videos (932 frames) in G:\G SSD AI\ComfyUI_windows_portable\comfyuiqwen\ComfyUI_windows_portable\ComfyUI\output/breastsplay9/vace-work
[Load Videos] Batched: Opening [1/4]: index000_clip1_00001.mkv
[Load Videos] Batched: Yielding 160 frames shape=(160, 1088, 1920, 3)
Prompt executed in 00:15:09
[MultiGPU_Memory_Monitor] CPU usage (86.4%) exceeds threshold (85.0%)
[MultiGPU_Memory_Management] Triggering PromptExecutor cache reset. Reason: cpu_threshold_exceeded
[LoRA-Manager] Calculating hash for checkpoint 'wan2.2_fun_vace_high_noise_14B_fp8_scaled' from G:/G SSD AI/ComfyUI_windows_portable/comfyuiqwen/ComfyUI_windows_portable/ComfyUI/models/diffusion_models/wan2.2_fun_vace_high_noise_14B_fp8_scaled.safetensors
[LoRA-Manager] Calculating hash for checkpoint 'wan2.2_fun_vace_low_noise_14B_fp8_scaled' from G:/G SSD AI/ComfyUI_windows_portable/comfyuiqwen/ComfyUI_windows_portable/ComfyUI/models/diffusion_models/wan2.2_fun_vace_low_noise_14B_fp8_scaled.safetensors
Meta-Batch 1/6
[Load Videos] Batched: Yielding 160 frames shape=(160, 1088, 1920, 3)
Prompt executed in 13.31 seconds
[LoRA-Manager] Calculating hash for checkpoint 'wan2.2_fun_vace_high_noise_14B_fp8_scaled' from G:/G SSD AI/ComfyUI_windows_portable/comfyuiqwen/ComfyUI_windows_portable/ComfyUI/models/diffusion_models/wan2.2_fun_vace_high_noise_14B_fp8_scaled.safetensors
[LoRA-Manager] Calculating hash for checkpoint 'wan2.2_fun_vace_low_noise_14B_fp8_scaled' from G:/G SSD AI/ComfyUI_windows_portable/comfyuiqwen/ComfyUI_windows_portable/ComfyUI/models/diffusion_models/wan2.2_fun_vace_low_noise_14B_fp8_scaled.safetensors
Meta-Batch 2/nan
[VACE Batch Context] === Start ===
[VACE Batch Context] Index: 1 (videos 2-1 of 2) [LOOP TRANSITION]
[VACE Batch Context] Input directory: G:\G SSD AI\ComfyUI_windows_portable\comfyuiqwen\ComfyUI_windows_portable\ComfyUI\output\input\breastsplay5
[VACE Batch Context] Video 1: 02.mp4
[VACE Batch Context] Video 2: 01.mp4
[VACE Batch Context] Work prefix: breastsplay9/vace-work/index001
[VACE Batch Context] === End ===
[VACE Join Batch] === Start ===
[VACE Join Batch] Video 1: 465 frames @ 1920x1088
[VACE Join Batch] Video 2: 465 frames @ 1920x1088
[VACE Join Batch] Flags: MIDDLE
[VACE Join Batch] Parameters: context=8, replace=8, new=0
[VACE Join Batch] Outputs:
[VACE Join Batch] control_video: torch.Size([33, 1088, 1920, 3])
[VACE Join Batch] control_mask: torch.Size([33, 1088, 1920])
[VACE Join Batch] start_images: 433 frames
[VACE Join Batch] end_images: 0 frames
[VACE Join Batch] VACE output: 33 frames (16 context + 17 generated)
[VACE Join Batch] === End ===
Using pytorch attention in VAE
Using pytorch attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
[MultiGPU Core Patching] text_encoder_device_patched returning device: cuda:0 (current_text_encoder_device=cuda:0)
Requested to load WanTEModel
loaded completely; 10835.48 MB loaded, full load: True
CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cuda:0, dtype: torch.float16
Requested to load WanVAE
0 models unloaded.
loaded partially; 0.00 MB usable, 0.00 MB loaded, 242.00 MB offloaded, 22.78 MB buffer reserved, lowvram patches: 0
0 models unloaded.
loaded partially; 0.00 MB usable, 0.00 MB loaded, 242.00 MB offloaded, 22.78 MB buffer reserved, lowvram patches: 0
Found quantization metadata version 1
Detected mixed precision quantization
Using mixed precision operations
model weight dtype torch.float16, manual cast: torch.float16
model_type FLOW
Requested to load WAN21_Vace
loaded partially; 5970.37 MB usable, 5497.17 MB loaded, 11046.68 MB offloaded, 472.56 MB buffer reserved, lowvram patches: 321
100%|████████████████████████████████████████████████████████████████████████████████████| 4/4 [04:43<00:00, 70.94s/it]
Found quantization metadata version 1
Detected mixed precision quantization
Using mixed precision operations
model weight dtype torch.float16, manual cast: torch.float16
model_type FLOW
Requested to load WAN21_Vace
loaded partially; 5970.37 MB usable, 5497.17 MB loaded, 11046.68 MB offloaded, 472.56 MB buffer reserved, lowvram patches: 321
100%|████████████████████████████████████████████████████████████████████████████████████| 4/4 [04:23<00:00, 65.78s/it]
Requested to load WanVAE
0 models unloaded.
loaded partially; 0.00 MB usable, 0.00 MB loaded, 242.00 MB offloaded, 22.78 MB buffer reserved, lowvram patches: 0
[Load Videos] Batched: Starting new generator for 6 videos (1398 frames) in G:\G SSD AI\ComfyUI_windows_portable\comfyuiqwen\ComfyUI_windows_portable\ComfyUI\output/breastsplay9/vace-work
!!! Exception during processing !!! Batched loader produced no frames
Traceback (most recent call last):
File "G:\G SSD AI\ComfyUI_windows_portable\comfyuiqwen\ComfyUI_windows_portable\ComfyUI\execution.py", line 534, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\G SSD AI\ComfyUI_windows_portable\comfyuiqwen\ComfyUI_windows_portable\ComfyUI\execution.py", line 334, in get_output_data
return_values = await asyncmap_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\G SSD AI\ComfyUI_windows_portable\comfyuiqwen\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-lora-manager\py\metadata_collector\metadata_hook.py", line 171, in async_map_node_over_list_with_metadata
results = await original_map_node_over_list(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\G SSD AI\ComfyUI_windows_portable\comfyuiqwen\ComfyUI_windows_portable\ComfyUI\execution.py", line 308, in asyncmap_node_over_list
await process_inputs(input_dict, i)
File "G:\G SSD AI\ComfyUI_windows_portable\comfyuiqwen\ComfyUI_windows_portable\ComfyUI\execution.py", line 296, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "G:\G SSD AI\ComfyUI_windows_portable\comfyuiqwen\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-wan-vace-prep\load_videos_from_folder.py", line 58, in load_videos
return self._load_batched(video_files, folder_path, debug, meta_batch, unique_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\G SSD AI\ComfyUI_windows_portable\comfyuiqwen\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-wan-vace-prep\load_videos_from_folder.py", line 125, in loadbatched
raise RuntimeError("Batched loader produced no frames")
RuntimeError: Batched loader produced no frames
Prompt executed in 00:16:41
@celestialnovel2686 I can't explain what I see in your logs. It appears that after Meta Batch Manager begins iterating over the work files, something causes the full workflow to run again. This is incorrect behavior and eventually leads to failure when the spurious third iteration gets back to Meta Batch Manager. As I said, I can't explain what causes this.
The best advice I have is:
- if you have any extensions installed that manipulate the ComfyUI queue, try disabling them
- try running with a newly-downloaded copy of the workflow
- try running with smaller input videos. 465 frames is not unreasonably large, but typical Wan clips are 81 frames. I don't know why long videos would cause a problem, but it's worth a try. I didn't test with very large videos.
- try running without the Meta Batch Manager node, or with different batch-size settings. You may encounter out-of-memory errors, but your errors do seem to be centered on Meta Batch Manager.
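For the smaller-input suggestion: a quick way to make shortened test copies of your clips is ffmpeg. This sketch only builds the command lines (the clip directory, output naming, and the 81-frame target are illustrative); run the printed commands in a shell where ffmpeg is on the PATH:

```python
from pathlib import Path


def trim_command(src: Path, dst: Path, frames: int = 81) -> list[str]:
    # Re-encode only the first `frames` video frames of src into dst.
    # -frames:v caps the output frame count; -an drops the audio track.
    return ["ffmpeg", "-y", "-i", str(src),
            "-frames:v", str(frames), "-an", str(dst)]


# Example: build a trim command for every mp4 in a hypothetical input folder.
src_dir = Path("input/clips")
for clip in sorted(src_dir.glob("*.mp4")):
    cmd = trim_command(clip, clip.with_name(f"short_{clip.name}"))
    print(" ".join(cmd))
```

That brings each test clip down to a typical Wan length and should tell us quickly whether clip length is the trigger.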
Good luck! I am out of ideas.