Funcon & InP
ENG:
If your VRAM is low, I recommend generating step by step. (On my 12GB card, one-shot generation only works with SD1.5; Pony and Flux need more than 12GB, so generate gradually instead, turning off each switch before moving on to the next workflow.) Funcon and InP are two separate workflows; do not run them at the same time. When FunControl is given an image, it runs an image-to-image pass to produce a new image that better matches the first frame of the video. If you hit an OOM error, generate step by step, or unload the model and try again. The workflow includes both English and Chinese instructions.
Reward-LoRAs (MPS)
https://huggingface.co/alibaba-pai/Wan2.1-Fun-Reward-LoRAs/tree/main
PS:
1.3B:
funcontrol
https://huggingface.co/alibaba-pai/Wan2.1-Fun-1.3B-Control/tree/main
inp
https://huggingface.co/alibaba-pai/Wan2.1-Fun-1.3B-InP/tree/main
GGUF 14B:
funcontrol
https://huggingface.co/city96/Wan2.1-Fun-14B-Control-gguf/tree/main
inp
https://huggingface.co/city96/Wan2.1-Fun-14B-InP-gguf/tree/main
This is my personal WAN2.1 workflow. Because my device is an RTX 3060 12GB, I have to use GGUF for generation. This workflow is nothing special, just a basic process; if you already have another workflow, feel free to ignore it.
Usage notes:
1. You must use the native T5 model: link
2. You must update ComfyUI and the GGUF nodes to the latest versions
GGUF download: t2v_link, i2v_link
If you like this model, please 👍 it and leave a review! Also, feel free to give me a ⚡; it would be greatly appreciated. If you don't like it, still let me know why so I can improve!
※ I use Forge or ComfyUI to generate. If your results don't match mine exactly, this may explain why. My LoRA may not work on certain checkpoints; if that's the case, please switch to a different checkpoint.
Comments (6)
How do I make a video longer than 3s? Help me.
If you want to increase the video duration, just set the length (it will consume more VRAM).
@TTangSlgy I tried, but it only repeats the first 3s.
@ktsminhtan771 Since WAN2.1 officially defaults to a length of 81 frames, which is about 5 seconds, you can try extending the video by using the last frame of the previous generation as the reference image for the next generation.
I haven't added this to the workflow yet, but I can include it if needed.
Basic Process:
Keep all sampling parameters unchanged.
Add a new Get Image Count node, connect it to Math Expression, set a → a-1, then link it to ImageFromBatch to extract the last frame and use it as a reference for the second generation. You also need to specify a prompt for the second generation.
Merge the batch images from both generations into a new video. Repeat the process to extend the duration further.
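The steps above can also be sketched outside ComfyUI. Here is a minimal NumPy sketch of the last-frame chaining idea; `extend_video` and `generate_fn` are hypothetical stand-ins for the node graph and the sampler pass, and it assumes the second generation's first frame equals the reference image (as in i2v), so that frame is dropped before merging:

```python
import numpy as np

def extend_video(frames_a, generate_fn, prompt):
    """Chain a second generation off the last frame of the first.

    frames_a: np.ndarray of shape (n, h, w, 3), the first generation's batch.
    generate_fn: stand-in for the sampler; takes (reference_frame, prompt)
    and returns a new frame batch whose first frame is the reference.
    """
    last = frames_a[frames_a.shape[0] - 1]  # index a-1: the final frame
    frames_b = generate_fn(last, prompt)    # second pass, last frame as reference
    # Drop the duplicated reference frame before merging the two batches
    return np.concatenate([frames_a, frames_b[1:]], axis=0)
```

Repeating the call with the merged batch's last frame extends the clip again, which mirrors step 3 above.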
@TTangSlgy It seems WAN2.1 has been updated to fix the default 5s. Can you update it?
@ktsminhtan771 I haven’t noticed any updates to the WAN2.1 model. The method I mentioned earlier is for increasing the video length even after you've already modified it in the settings.
With this method, you can extend the video by 10s, 15s, or more—until it reaches your desired length.
Example: https://www.youtube.com/watch?v=HX2BCBXSiKI
Alternatively, you can use Kijai’s workflow to modify num frames.
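As a rough sanity check when picking the length, here is a small helper, assuming WAN2.1's 16 fps output and its 4k+1 frame-count constraint (which is why the default is 81 frames ≈ 5 s); the function name is my own invention:

```python
def frames_for_duration(seconds: float, fps: int = 16) -> int:
    """Return the nearest valid WAN2.1 frame count (4k + 1) for a target duration."""
    n = round(seconds * fps)           # raw frame count at 16 fps
    return 4 * round((n - 1) / 4) + 1  # snap to the 4k+1 grid the model expects
```

For example, a 5-second target maps back to the 81-frame default mentioned above.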
