v1.2 is finally out.
All instructions and custom node info are now included in the workflow's readme section.
v1.1 is out.
Some minor improvement on the face morphing/degradation issue.
Minor improvements to scene transitions.
Please install https://github.com/ltdrdata/was-node-suite-comfyui and https://github.com/wildminder/ComfyUI-KEEP for v1.1.
As requested by many of you, here is my video extension workflow.
With it you can extend your videos indefinitely, to your liking.
Have fun, and join my Discord to contact me directly.
Custom Nodes:
https://github.com/pythongosssss/ComfyUI-Custom-Scripts
https://github.com/yolain/ComfyUI-Easy-Use
https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite
https://github.com/cubiq/ComfyUI_essentials
https://github.com/banodoco/steerable-motion
https://github.com/Fannovel16/ComfyUI-Frame-Interpolation
https://github.com/willmiao/ComfyUI-Lora-Manager
https://github.com/Smirnov75/ComfyUI-mxToolkit
Description
Face consistency fix for video extension.
Revamped UI
FAQ
Comments (13)
Great workflow. Could you give some guidance on length and FPS? I'm getting either super-fast movement for the first part of the video that then slows down drastically, or normal speed followed by super slow motion for the rest of the duration. I'm clearly not understanding the length and FPS settings, as I'm also generating 30-second videos when trying to set 10 seconds.
The interpolation option in this WF adds frames, so if your input is 16 fps and you double the frames with interpolation, you would want to change the fps to 32 to keep the same playback speed.
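The arithmetic behind that advice can be sketched out; this is just illustrative math, not tied to any specific node's parameters in the workflow:

```python
def output_fps(input_fps: float, interpolation_multiplier: int) -> float:
    """Frame interpolation multiplies the frame count; to keep the same
    playback duration, the output fps must scale by the same factor."""
    return input_fps * interpolation_multiplier

def duration_seconds(num_frames: int, fps: float) -> float:
    """Playback duration is simply frame count divided by fps."""
    return num_frames / fps

# 16 fps source, 2x interpolation -> render at 32 fps for unchanged speed
assert output_fps(16, 2) == 32

# 160 source frames at 16 fps = 10 s. After 2x interpolation (320 frames),
# leaving the fps at 16 gives 20 s of slow motion; bumping it to 32 restores 10 s.
assert duration_seconds(160, 16) == 10.0
assert duration_seconds(160 * 2, 16) == 20.0
assert duration_seconds(160 * 2, 32) == 10.0
```

In short: a longer-than-expected video with slow motion usually means the frames were multiplied but the fps was left at the source value.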
You're struggling with life if, in 2026, with AI models of all types evolving week by week at an exponential rate, you still have to ask in the comments section why you have an error. Imagine if AI could do more than just make images...
I had previously reported a color issue (which gets more exaggerated when using Lightning LoRAs with CFG > 1.0). I've found a reasonable (not perfect!) solution for this. There is an "Image Color Match" node in the popular ComfyUI-Easy-Use pack, which accepts a reference image and a target image. It's surprisingly effective. Depending on where it's being used (in the middle of processing, or color-correcting a rendered video from Load Video, etc.), you may need to use "Rebatch Images" / "Image List To Image Batch" to feed inputs in and collect the results into a new video.
As I said, this is not perfect, but the results are very impressive.
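For anyone curious what that kind of node does under the hood: a common baseline for color matching is per-channel mean/std transfer. The sketch below is a generic version of that technique; the actual algorithm inside the "Image Color Match" node may well be different (histogram matching, LAB-space transfer, etc.), so treat this as an illustration only.

```python
import numpy as np

def match_color_mean_std(target: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Shift and scale each RGB channel of `target` so its mean and std
    match `reference`. Inputs are float arrays in [0, 1], shape (H, W, 3)."""
    out = target.astype(np.float64).copy()
    ref = reference.astype(np.float64)
    for c in range(3):
        t_mean, t_std = out[..., c].mean(), out[..., c].std()
        r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
        if t_std > 1e-8:  # avoid dividing by zero on flat channels
            out[..., c] = (out[..., c] - t_mean) / t_std * r_std + r_mean
    return np.clip(out, 0.0, 1.0)
```

Applied frame by frame against a reference frame (e.g. the last frame of the previous segment), this kind of transfer is what keeps extended segments from drifting in color.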
This is so great and easy to use, yet I've run into 2 issues that are bugging me.
1. When using the workflow, it doesn't crop out the face from the video, even with the override set to false.
2. The extended part becomes very blurry.
Can anyone advise on the above?
I did find a workaround for the first issue. Unpin the "Load Override Image" node and move it to the side. Underneath there should be a node labeled "IMAGE". From your node library, add an "easy getNode" and set its value to "FIRST_FRAME". Plug that into the IMAGE node. It should work now. If you still want to use the override from time to time, you'll have to change it back, or figure out how to switch it depending on the override parameter.
Cool WF, but I don't see where to set the length of the video (the extended part) or where to enter the number for the start %.
One of the best WFs ever!!! Simple and effective. But the IP-Adapter doesn't seem to have any effect: nothing changes when I switch the BOOLEAN value to "true", even with the face weight at 5. The output face is no different from the result generated by bypassing the IP-Adapter node. Could you please tell me whether it works in your case?
I have exactly the same issue. Any hints here?
There aren't actually any instructions on how to use the workflow; you only put instructions for installing the custom nodes on GitHub. If anyone has gotten this workflow to extend videos, I'd like to know which node sets the length of the extended part.
The fuck is a node?
Hi, I just downloaded it. Do you have any idea how to use it? I don't get any errors, but I get horrible results. I also have a lot of LoRAs, but I can't figure out how to apply them.