Given a first frame, an intermediate frame, and a last frame supplied by the user, this workflow generates a video that stays consistent with all three input frames.
v2.1 update: parameter tuning following the node update.
v2.0 update: reduced the abnormal flickering at the middle-frame connection points of generated videos.
You can try it directly via the link below without downloading the model; if the results look good, you can then deploy it locally.
Online demo: https://www.runninghub.ai/ai-detail/1988614804149592066/?inviteCode=rh-v1315
For more workflow parameter settings, refer to my v2.1 settings on RunningHub: https://www.runninghub.ai/post/1988545892204593153/?inviteCode=rh-v1315
Intermediate-frame node: https://github.com/wallen0322/ComfyUI-Wan22FMLF
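If you prefer to run the workflow locally rather than through the online demo, a ComfyUI graph containing this node can be queued programmatically through ComfyUI's standard HTTP API. The sketch below is a minimal, hedged example: it assumes a ComfyUI server running on the default port and a workflow file exported from the editor with "Save (API Format)"; the filename `wan22_fmlf_workflow_api.json` is a placeholder, not something shipped with the node.

```python
# Minimal sketch: queue a first/middle/last-frame workflow on a local
# ComfyUI server via its POST /prompt endpoint. The workflow JSON and
# its filename are assumptions -- substitute your own exported graph.
import json
import urllib.request


def build_payload(workflow: dict, client_id: str = "fmlf-demo") -> dict:
    """Wrap an API-format workflow graph in the shape /prompt expects."""
    return {"prompt": workflow, "client_id": client_id}


def queue_workflow(workflow: dict, host: str = "127.0.0.1:8188") -> dict:
    """Send the workflow to a running ComfyUI instance; return its reply."""
    data = json.dumps(build_payload(workflow)).encode("utf-8")
    req = urllib.request.Request(
        f"http://{host}/prompt",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Usage (with a server running and a graph exported in API format):
#   with open("wan22_fmlf_workflow_api.json") as f:
#       queue_workflow(json.load(f))
```

Loading the three frames themselves is done inside the graph (e.g. via `LoadImage` nodes wired into the Wan22FMLF node), so the script only needs to submit the exported JSON.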
Sign up via this link to claim 1000 RH Coins and generate tons of images/videos for FREE! 🚀
My AI models/apps are on RH now! Try them out & support my work — every click counts. Thank you! 🚀
More of my works and workflows are on RunningHub, a treasure trove for AI image & video creation! Hundreds of fascinating and practical AI apps are shared there daily by ComfyUI developers worldwide, perfect for both fun and productivity.
Comments (2)
Wan2.1 VACE lets the user insert any number of frames. Wan2.2 has notably proven too hard to hack to the same degree of control freedom; first-last seems to be the best we have for that model. For anything more ambitious, Wan2.1 is likely to be more successful!
Do you have a link to a 2.1 workflow that gets good results?