This workflow takes the last 15 frames of your video, uses them to generate a logically continuous follow-on sequence, then merges that sequence with the original video.
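The continuation logic can be sketched as follows. This is a simplified illustration only: `generate_continuation` is a hypothetical stand-in for the actual VACE sampling step (WanVaceToVideo + KSampler in the workflow), and the frame counts are example values.

```python
import numpy as np

def generate_continuation(context: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in: the real workflow runs the VACE sampler here,
    # conditioned on the context frames. We just hold the last frame for 16 frames.
    return np.repeat(context[-1:], 16, axis=0)

def extend_video(frames: np.ndarray, n_context: int = 15) -> np.ndarray:
    """Take the last `n_context` frames as conditioning, generate a
    continuation, and merge it back onto the original video."""
    context = frames[-n_context:]                  # last 15 frames fed to VACE
    continuation = generate_continuation(context)  # new frames
    return np.concatenate([frames, continuation], axis=0)

# Example: an 81-frame, 480x832 RGB video extended by 16 frames.
video = np.zeros((81, 480, 832, 3), dtype=np.uint8)
extended = extend_video(video)
print(extended.shape[0])  # 97
```

The key point is that only the tail of the original video is used as conditioning, which is why the transition between the two clips comes out smooth.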
📂Files :
Recommendation:
>24 GB VRAM: base or Q8_0
16 GB VRAM: Q5_K_S
<12 GB VRAM: Q4_K_S
For the base version:
VACE Model: wan2.1_vace_14B_fp8_e4m3fn.safetensors or wan2.1_vace_1.3B_fp16.safetensors
in models/diffusion_models
CLIP: umt5_xxl_fp8_e4m3fn_scaled.safetensors
in models/clip
For the GGUF version:
VACE Quant Model: Wan2.1-VACE-14B-QX_0.gguf
in models/diffusion_models
Quant CLIP: umt5-xxl-encoder-QX.gguf
in models/clip
VAE: wan_2.1_vae.safetensors
in models/vae
ANY upscale model (deprecated):
Realistic: RealESRGAN_x4plus.pth
Anime: RealESRGAN_x4plus_anime_6B.pth
in models/upscale_models
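The placement above can be done in one go from the ComfyUI root. The snippet below is a sketch assuming the files were downloaded to `~/Downloads` and that you are using the base (non-GGUF) version; swap in the GGUF filenames if that is what you downloaded.

```shell
# Run from your ComfyUI root. Creates the model folders if missing,
# then moves the downloaded files into place (skips files not present).
mkdir -p models/diffusion_models models/clip models/vae models/upscale_models

mv ~/Downloads/wan2.1_vace_14B_fp8_e4m3fn.safetensors models/diffusion_models/ 2>/dev/null || true
mv ~/Downloads/umt5_xxl_fp8_e4m3fn_scaled.safetensors models/clip/ 2>/dev/null || true
mv ~/Downloads/wan_2.1_vae.safetensors models/vae/ 2>/dev/null || true
mv ~/Downloads/RealESRGAN_x4plus.pth models/upscale_models/ 2>/dev/null || true
```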
📦Custom Nodes :

Description
New version that uses VACE
FAQ
Comments (6)
Thanks for this, but how do you install MagCache? It can't be downloaded from the Manager, and it's not in your nodes auto-installer.
Does this only work with VACE, or can we use it with the basic Wan 2.1 model too?
It seems the RifleXRoPE node is missing a latent node from WanVaceToVideo.
Aside from that, it works very well.
I wonder if VACE is the reason it makes such a smooth transition.
Can you make one for Wan 2.2? <3
Would it be possible to make the 2.2 version?
Using this just ends up with a full-black video. Any ideas what I'm doing wrong?