This project is a node-based implementation of video generation with the Wan2.1 model, focused on start- and end-frame guidance. The source code is a modification of Kijai's node code, so for model downloads and installation instructions, please refer to ComfyUI-WanVideoWrapper. This project specifically adds start- and end-frame-guided video generation.
The nodes support both the 720P and 480P versions of Wan2.1. Generating videos with a frame count of 25 or higher is recommended, as lower frame counts may affect the consistency of character identity.
Currently, this start- and end-frame approach is in its early stages. It implements the functionality at the code level only and does not yet involve model or LoRA fine-tuning, which is planned for future work. Additionally, incorporating end-frame guidance into Image-to-Video (I2V) appears to degrade generation quality, which is another area for future improvement.
GitHub: raindrop313/ComfyUI-WanVideoStartEndFrames: Start- and end-frame video generation nodes based on the modified Kijai version of the Wan2.1 nodes
START FRAME:
END FRAME:
Comments
I have updated ComfyUI and ComfyUI-WanVideoWrapper, but when I run the JSON workflow, it tells me that all the "_2frames" nodes are missing. ... I don't know what I did wrong. Is there anything else to install?
Go to raindrop313's GitHub, download the project as a ZIP, and extract it into the custom_nodes folder in ComfyUI.
Don't forget to run pip install -r requirements.txt.
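Put together, the two steps above are a standard manual custom-node install. A minimal sketch, assuming the repository URL from the project link above and that `pip` points at the Python environment ComfyUI uses:

```shell
# Run from your ComfyUI root directory.
cd custom_nodes
# Cloning is equivalent to downloading and extracting the ZIP.
git clone https://github.com/raindrop313/ComfyUI-WanVideoStartEndFrames
cd ComfyUI-WanVideoStartEndFrames
# Install the node pack's Python dependencies into ComfyUI's environment.
pip install -r requirements.txt
```

Restart ComfyUI afterwards so the new "_2frames" nodes are registered.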
@princenazjak Thanks ^.^)v ... Now it seems to be accepted, but nothing is generated... it gets stuck on the Sampler... maybe my 4080S is too old for this process.
@pernet_jm813 Yeah, mine just crashes as well. I figured the workflow wasn't set up for my old 3080.
@pernet_jm813 I haven't actually run it on a 4080S, but increasing the blockswap value and using the fp8-quantized model might solve your problem.
@872409853830 Thank you for answering me, but yes, I tried everything: 16... 8... changing the BlockSwap and bypassing certain nodes like TeaCache to lighten the load, but no... it always goes out of memory. Impossible to run... sorry.
Any chance for a native version of this?
Yes, I've been quite busy lately, but I will continue to provide updates.
Splendid job, man. Kijai's asked to merge this mod to his repo, I hope you'd be willing.
Yes, I'd be happy to
A 3060 can't run it; I only get OOM.
Try increasing the 'blockswap' parameter, and if that doesn't work, try the fp8-quantized model. In theory, as long as your card supports WAN2.1's I2V function, you should be able to run this workflow.
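For readers unfamiliar with block swapping: the idea is to keep only a subset of the model's transformer blocks resident in VRAM and page the others in from system RAM on demand, trading generation speed for a lower memory peak (which is why a higher blockswap value helps on smaller cards). This is a framework-free sketch of that eviction policy; the class and method names are illustrative, not the actual WanVideoWrapper API:

```python
class BlockSwapScheduler:
    """Hypothetical sketch: track which transformer blocks are resident
    in fast memory (VRAM), evicting the oldest-loaded block when a new
    one must be paged in. A real implementation would also move the
    block's weights between devices."""

    def __init__(self, num_blocks: int, resident: int):
        self.num_blocks = num_blocks      # total transformer blocks
        self.resident = resident          # max blocks kept in VRAM
        self.loaded = set(range(resident))  # initially resident blocks

    def ensure_loaded(self, i: int) -> list[int]:
        """Make block i resident; return the list of evicted blocks."""
        if i in self.loaded:
            return []
        evicted = []
        if len(self.loaded) >= self.resident:
            victim = min(self.loaded)     # evict the lowest-index block
            self.loaded.remove(victim)
            evicted.append(victim)
        self.loaded.add(i)
        return evicted
```

A higher `resident` count (i.e. a lower blockswap value) means fewer evictions and faster sampling, but more VRAM held at once; OOM reports like the ones in this thread are the signal to shrink it.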
Thank you for your reply. Thanks to a friend's reminder, I bypassed the BlockSwap node and used VRAM management instead, and it worked: 10 steps took 30 minutes.
This could be the holy grail for longer videos without loss of quality.
For some reason it works faster than endframe in WanVideoWrapper, nice.
Also, if you reduce the end node's weight, you can make the generation more open-ended, but for me the image got dimmer, so I added a color-match node.
Can a GGUF model be used in this workflow?
Unfortunately, the current workflow does not support GGUF, and I am not very familiar with its implementation. For now I want to focus on adding new features to start- and end-frame-guided Wan video generation, including intermediate-frame guidance and model training. GGUF support is more of a long-term task for me.