    WAN 2.2 i2v workflow for LONG videos - SVI + GGUF + UPSCALING! - v2.0
    NSFW

    This is a clean and concise workflow for ComfyUI that allows you to generate longer videos by chaining together up to 4 separate clips.

    V3.0 now includes SVI for better consistency in longer videos!

    Note: Full instructions, advice, and links to models can be found in the note in the workflow itself.

    This workflow is based on my clean 2+2[+2] lightning workflow that can be found here:

    https://civarchive.com/models/2194801/simple-wan-22-i2v-60-fps-comfyui-workflow-4-steps-2-or-3-ksamplers-last-frame-option-gguf-and-upscaler

    You can use this workflow to generate a single WAN 2.2 video or you can have it chain up to 3 additional videos based on the final frame of the previous clip. In this way you can create longer videos, with distinct changes in action or framing without needing to generate the whole thing in one go. Simply fine tune the first clip with the others disabled until you are happy. Then don't change that clip and move on to clip 2. ComfyUI won't spend time regenerating the earlier clips each time you want to tune the next one!
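
    The chaining logic, sketched in Python for readers who think in code (a conceptual sketch with a hypothetical generate_clip helper - the actual workflow does this with ComfyUI nodes, not a script):

        def generate_chain(input_image, prompts, generate_clip):
            """generate_clip(image, prompt) -> list of frames (hypothetical)."""
            clips = []
            start_frame = input_image
            for prompt in prompts:            # one prompt per clip, up to 4 clips
                frames = generate_clip(start_frame, prompt)
                clips.append(frames)
                start_frame = frames[-1]      # the last frame seeds the next clip
            return clips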

    Please consider tagging this as a resource you used if you generate anything with it and upload it here. I'd love to see what you're making with the tool!

    Description

    Updated to include GGUF loaders, automatic steps calculation, and various other UI and usability improvements!

    Updated again to improve the upscaling behaviour (it will now upscale BEFORE interpolation, speeding things up a lot!)
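
    A rough illustration of why upscaling before interpolation is faster: the upscaler processes far fewer frames. The numbers below are illustrative, not taken from the workflow:

        base_frames = 81            # one WAN clip, for example
        rife_multiplier = 2         # RIFE 2x roughly doubles the frame count

        upscale_first = base_frames                        # ~81 frames to upscale
        interpolate_first = base_frames * rife_multiplier  # ~162 frames to upscale
        print(upscale_first, interpolate_first)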

    Comments (33)

    BoredWorker · Jan 2, 2026

    In v2, while using GGUF, the Load Model (High Noise) node is empty (I don't have the WAN 2.2 safetensors files) and the workflow is throwing an error.

    Also, an option to choose between upscale-then-RIFE and RIFE-then-upscale would be good.

    TheFatController
    Author
    Jan 2, 2026

    Hey, thanks for the feedback. I'll take a look at the error that appears when the non-GGUF models are missing. I would guess a quick fix is to simply bypass the .safetensors loader nodes and possibly the GGUF switch.

    I will look at the ordering of RIFE/Upscale too, as there might be a better way to do it than right now.

    R_THUNDERS · Jan 2, 2026

    Could you please set up a workflow on Running Hub and share the link?

    TheFatController
    Author
    Jan 2, 2026

    Hey, sorry - I've never used Running Hub as I just generate on my local machine.

    sp573 · Jan 3, 2026 · 2 reactions

    Took me a bit to figure everything out. Reading the in-workflow readme note on the far left would have made things a LOT easier - especially reading it BEFORE I started clicking around and having to remember what I reset, lmao. That's on me, not OP.

    Once I read that, figured out where the Lightning LoRAs went, and set the correct step math, I stopped getting a blurry/unprocessed/no-low-pass image as the output of Clip 1, which made clips 2/3 possible.

    Pretty amazing overall, can't wait to really dig into it. Use OP's recommendation of getting one smooth clip working and then continuing on, to save your system some work - or brute force it like I did and eventually read and fix it. I'm a website comment, not a cop.

    TheFatController
    Author
    Jan 3, 2026

    Glad you like it!

    For what it's worth, I literally just updated the download file for V2.0 to improve the upscaling behaviour so if you downloaded it more than an hour ago I'd recommend re-downloading it :)

    sp573 · Jan 4, 2026

    @TheFatController fair enough, I'll go back and take a look - I tagged you in a post but it wouldn't let me add it as a resource sadly :(

    The upscaling split I did was more so I could queue 4-5 of these while I go do something, review, and then upscale anything decent or rework as required, but also meant I could get through 10 of them overnight on a moderate 5080/32GB RAM setup.

    You mention in the internal doc generating one clip with the rest disabled, then continuing on so it won't regenerate - I think I may have been doing this wrong; do you have any tips? It still runs through the whole first prompt: even though the output is the same with an unchanged seed, the compute time was nearly identical. When I tried to bypass Clip 1 in Comfy and continue on, it did skip the step, but the generated Clip 2 was all noise/static. I also tried bypassing Clip 1 and un-bypassing the outputs from the previous run, but got the same thing.

    Any pointers to help out there? Otherwise I like it, I've modified quite a bit for my flow but this has taught me a ton about this since I only started a week ago.

    TheFatController
    Author
    Jan 4, 2026

    @spammaplease573 No worries! I know the system is a bit weird when it comes to adding resources to an image. I often find that unless I add this workflow as the first resource, it won't let me add it later. And it also often won't let you add resources from the generation of the original input image (e.g. PONY checkpoints and Loras) as well as the WAN loras and checkpoints.

    I'm afraid I'm not quite sure why it's regenerating the whole thing - it might be a ComfyUI setup thing? By any chance, have you used my txt2img workflow? There's another way to test it: generate an image with the "Face Detailer" disabled, then enable the face detailer and run it again - this shouldn't regenerate the original image, just do the face detailer part.

    AzulAuthority · Jan 3, 2026

    How do I deal with the issue of different clips working at different speeds? All the same settings but the 2nd clip moves so much faster.

    TheFatController
    Author
    Jan 3, 2026· 1 reaction

    Does it use different loras? I've found some loras have a large effect on the speed of motion.

    There are a few things you can try. Firstly, you can add speed prompts, but they are pretty limited in their effect. Secondly, check that the interpolation multiplier and frame rate are being set correctly for every clip. If they are and you still have issues, you can set them independently for each clip: to do that you'd want to "disallow UE links" (or similar) on the RIFE nodes and the video save nodes, then enter values manually for each clip to get the speeds to match up.

    Hope that helps!
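
    To make the frame-rate arithmetic concrete, a minimal sketch assuming a native generation rate of 16 fps (typical for WAN 2.2 14B, but verify for your setup): the save fps has to scale with the interpolation multiplier, or clips will play back at different speeds.

        native_fps = 16   # assumed native rate; check your own setup
        for clip, multiplier in [("clip 1", 2), ("clip 2", 4)]:
            save_fps = native_fps * multiplier
            print(f"{clip}: RIFE x{multiplier} -> save at {save_fps} fps")
        # Two clips with different multipliers saved at the same fps will
        # appear to move at different speeds.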

    majin325551 · Jan 4, 2026

    I was using the 1.0 version and for some reason it would always generate the entire series of clips each time. Not sure why - the seed was fixed and nothing was changed except the prompt for the step I was working on. If I was on step 3, it would generate steps 1, 2, and 3 each time. The first two videos were always the same, since nothing was changed.

    I'm going to give the 2.0 version a try and see if it works.

    TheFatController
    Author
    Jan 4, 2026

    That's odd. Were you using the upscaling? It might not have been set up properly in 1.0. There's also a chance something in your ComfyUI setup could make that happen, although I'm afraid I'm not really sure. Hopefully 2.0 helps!

    majin325551 · Jan 4, 2026

    @TheFatController I was not using the upscaling, maybe that's why? I was doing fp16, 1024x576, 32 fps, 81 length, model shift 6, and 1x1x2 steps. I thought it might have been because of the updated lightning loras, but changing them didn't do anything.

    majin325551 · Jan 6, 2026

    @TheFatController Tested on both 1.0 and 2.0 - I still get repeat generations for each clip. I reinstalled ComfyUI, installed the Manager, and installed only the missing custom nodes, and I still get repeat generations of each step. I have no clue why. My PC is an AMD Ryzen 9 7950X, a Radeon RX 7900 XTX with 24 GB VRAM, and 64 GB RAM.

    TheFatController
    Author
    Jan 6, 2026

    @majin325551 I'm afraid I'm really not sure! I did some Google digging and, as I thought, found the following:

    "ComfyUI is designed to only rerun nodes in a workflow if a parameter in an upstream node has changed or if the seed is set to change. This behavior is a feature designed to save time and resources by using cached results for parts of the workflow that are identical to a previous run."

    You said your seed is fixed, but is it fixed everywhere and propagating properly? I.e. is the "seed" field in all the KSamplers greyed out and using the main seed option near the start of the workflow?

    Unless you've got a setting somewhere that tells ComfyUI not to cache workflows (I don't know if this exists?), there must be something (like the seed) changing every time you run it.
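
    As a toy illustration of the caching behaviour quoted above (a minimal memoisation sketch, not ComfyUI's actual implementation):

        cache = {}

        def run_node(name, inputs, compute):
            # A node only reruns when its inputs (including the seed) change.
            key = (name, tuple(sorted(inputs.items())))
            if key not in cache:
                cache[key] = compute(**inputs)
            return cache[key]

        # With a fixed seed the second run is a cache hit and costs nothing:
        sample = lambda seed, steps: seed + steps
        run_node("ksampler", {"seed": 42, "steps": 6}, sample)
        run_node("ksampler", {"seed": 42, "steps": 6}, sample)  # cached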

    majin325551 · Jan 7, 2026 · 1 reaction

    @TheFatController I figured it out when you mentioned telling ComfyUI not to cache. I stopped using the cache because it would use a lot of resources, and I could make higher resolution generations with no cache. I'm fairly new to ComfyUI, so I'm not sure if it was really making a difference or not. My startup script had "python main.py --cache-none --normalvram". Removing the --cache-none allowed the workflow to run properly.

    TheFatController
    Author
    Jan 9, 2026

    @majin325551 Aha! Glad you figured it out at least :)

    MidnightSanctumAI · Jan 6, 2026

    Is there a version of this without the rgthree nodes? I had other workflows that worked for me a few days ago, but now they don't. Even worse, everyone seems to be using these nodes, so I can't seem to find an i2v workflow that actually works properly anymore.

    TheFatController
    Author
    Jan 6, 2026

    Are you using the Nodes 2.0 in ComfyUI? I have these turned off and have no issues with the rgthree nodes, at least on ComfyUI 0.3.76, which I'm running.

    MidnightSanctumAI

    Yes, I manually turned off Nodes 2.0 and rolled back ComfyUI (not sure I went back to 0.3.76, I will double-check that), and the fast group node and high and low lora stacks still don't work. So I have been altering my old workflows to work without them, but I feel like something has been off since the switch.

    MCSizeMatters · Jan 27, 2026 · 1 reaction

    First of all, this is a great workflow for longer videos. Much more efficient than grabbing last frames and doing the whole thing manually.

    And, folks, be sure to read the instructions carefully!

    I do find that the third and fourth iterations have a noticeable drop in detail, do you have any suggestions to minimize this?

    TheFatController
    Author
    Jan 27, 2026· 1 reaction

    Hi, and thanks for the comment.

    Unfortunately I don't have much in the way of advice for the quality drop - it's just inherent in the whole "copy of a copy" thing. That being said, I am working on a version that uses the SVI loras and SVI video nodes that "might" improve things.

    I'm not quite done testing it yet, because I'm not yet sure if it's better, but if I can confirm it is, then I'll upload a new version.

    The only advice I can give really is that the more movement you have, the faster things degrade. This isn't that helpful though, as it's an animation! If you wanted things to stay still you wouldn't be using WAN!

    MCSizeMattersJan 27, 2026· 1 reaction

    @TheFatController I appreciate the reply, and that's kind of the issue I figured. I'll try and reduce some of the rate of change in motion and see how it goes. Like I said, your workflow makes it a lot easier.

    TheFatController
    Author
    Jan 28, 2026· 1 reaction

    @MCSizeMatters V3 with SVI is up now!

    MCSizeMatters · Jan 28, 2026 · 1 reaction

    @TheFatController I saw! I just downloaded it and am mucking about right now. I really appreciate your workflows.

    MCSizeMatters · Jan 28, 2026

    V3 is really good. I got good results on the first pass with no second step. Not quite posting quality yet.

    Couple of questions... Should I be using the SVI High Lora in the first step?

    If so, what should I be using in Step 2? Or should I be turning off the SVI High in step 1 if I'm using it in step 2?

    Any suggestions on the best place to be adding other loras into the workflow? High? Low? Step 2 or 3?

    This is a great workflow with good results right out of the box, another winner!

    TheFatController
    Author
    Jan 29, 2026· 1 reaction

    @MCSizeMatters Glad you like it so far.

    So I tend to use only 2 steps (i.e. disable step 2) and run 3+3 steps high+low, so I use the SVI lora on both steps at 1.0 strength. I am not an expert at all, but I usually add any other loras I want to use to both the high and low noise steps, though I often run them at about half strength on the low noise.
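
    For reference, here is how a 3+3 high/low split typically maps onto ComfyUI's KSampler (Advanced) parameters - illustrative values only, since the workflow computes these automatically:

        total_steps = 6
        switch_at = 3    # high-noise model takes steps 0-3, low-noise takes 3-6

        high_noise = dict(steps=total_steps, start_at_step=0, end_at_step=switch_at,
                          add_noise="enable", return_with_leftover_noise="enable")
        low_noise = dict(steps=total_steps, start_at_step=switch_at, end_at_step=total_steps,
                         add_noise="disable", return_with_leftover_noise="disable")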

    MCSizeMatters · Jan 29, 2026

    @TheFatController Interesting, thank you. I'll give that a try.

    kosmoman · Jan 31, 2026 · 1 reaction

    One thing you can do is choose the last frame manually. Many times the last frame of a generation is one that doesn't contain any, or loses most, of the original character's features. Sometimes a frame a few frames earlier holds more of those features or is sharper in general, and using that instead of whichever frame happens to come last can yield better results, since the actual last frame might be blurry and/or caught in an awkward phase of a facial expression due to the speed of the motion, motion blur, etc. I am sure you could even automate this, but that might be pretty complex. User Playtime-AI also created a Same-Face-Fix Lora that helps a little bit:

    https://huggingface.co/Playtime-AI/Wan2.2-Loras/resolve/main/Wan2.2%20-%20T2V%20-%20Same%20Face%20Fix%20v2-%20LOW%2014B.safetensors?download=true

    https://huggingface.co/Playtime-AI/Wan2.2-Loras/blob/main/Wan2.2%20-%20T2V%20-%20Same%20Face%20Fix%20v2-%20LOW%2014B.safetensors
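
    For anyone tempted to automate that frame choice, a minimal sketch that scores the final few frames by variance of the Laplacian (a common sharpness proxy) and returns the sharpest; it needs opencv-python and numpy, and it only measures sharpness, not how well the character's features are preserved:

        import cv2
        import numpy as np

        def pick_sharpest_last_frame(frames, window=8):
            """frames: list of HxWx3 uint8 arrays; searches the final `window` frames."""
            candidates = frames[-window:]
            scores = [cv2.Laplacian(cv2.cvtColor(f, cv2.COLOR_BGR2GRAY), cv2.CV_64F).var()
                      for f in candidates]
            return candidates[int(np.argmax(scores))]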

    TheFatController
    Author
    Jan 31, 2026· 1 reaction

    @kosmoman That's a great idea, albeit quite time-consuming - it's one I considered doing myself. However, with V3.0 of the workflow, the SVI loras and SVI video nodes actually don't select a single "last frame" any more.

    I believe the way it works is that you use one or more "anchor" frames, which in this workflow is the initial input image. After that, each successive clip uses the full stack of generated frames from the previous one as guidance ("prev_samples" in the WanImageToVideoSVIPro node). The anchor is still referred to throughout, so you keep some of the character consistency even if the character moves out of and back into frame.
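
    Contrasted with the old last-frame chaining, the SVI flow looks roughly like this (a conceptual sketch with a hypothetical svi_generate helper; only the prev_samples input of WanImageToVideoSVIPro comes from the workflow itself):

        def generate_svi_chain(anchor_image, prompts, svi_generate):
            """svi_generate(anchor, prev_samples, prompt) -> samples (hypothetical)."""
            prev_samples = None
            clips = []
            for prompt in prompts:
                samples = svi_generate(anchor_image, prev_samples, prompt)
                clips.append(samples)
                prev_samples = samples   # the full frame stack guides the next clip
            return clips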

    MCSizeMatters · Feb 6, 2026

    @TheFatController I have to say, I love the speed of the 2-step process, but moving to the 3-step makes a massive improvement in quality and consistency through the whole video (now that I figured out how to get 3-step working - protip: read the instructions!).

    TheFatController
    Author
    Feb 7, 2026

    @MCSizeMatters Glad you still like the workflow. I'd love to see some of your generations if you upload any here - just tag this workflow in the resources and it'll show up at the bottom so I (and everyone!) can see it :)

    I haven't played much with the 3-step recently because I was getting similar results with the 2-step, but I'm interested in the settings/steps/loras you use to get an improvement.

    MCSizeMatters · Apr 3, 2026 · 1 reaction

    Still really liking your workflow, now that my system seems to be back in business. But with my switch to the 5060 Ti's Blackwell architecture, I made one change to the process: I replaced the upscaling with the Blackwell-native RTX Upscaler. Super high quality, doesn't introduce noise, and fast. You need a Blackwell card, but if you have one, it's worth trying.

    Workflows
    Wan Video 2.2 I2V-A14B

    Details

    Downloads
    2,364
    Platform
    CivitAI
    Platform Status
    Available
    Created
    12/31/2025
    Updated
    5/12/2026
    Deleted
    -

    Files

    wan22I2vWorkflowForLONGVideos_v20.zip

    Mirrors

    wan22I2vWorkflowForLONGVideos_v20.zip