CivArchive
    Wan 2.2 Painter Long Video with Sound! - v1.0 Low VRAM basic
    NSFW


    Note: If you have ANY issues with nodes not downloading, read the notes or reach out. There's nothing special about any of the nodes that aren't core modules.

    ⛔⚠️🛑✋ Read the notes completely before using. The most common install and node problems are covered in the directions.

    This is a high VRAM workflow. If you have less than 12 GB of VRAM, I encourage you to try my Low VRAM Workflow: https://civarchive.com/models/2236782/wan-22-1080p-on-low-vram-up-to-4k

    Chinese instructions are also included.

    I decided to launch this separately from my base Wan 2.2 workflow, as it is very different and is not for low VRAM.

    All in 1 workflow:

    • Uses painter nodes to extend the video to any length you want.

    • Easy resize of images

    • Lora Loaders

    • Seed VR2

    • MMAudio

    • Rife VFI Interpolation

    • Thorough notes and links.

    • Chinese instructions also included


    Instagram: https://www.instagram.com/synth.studio.models/

    Buy me a☕ https://ko-fi.com/lonecatone

    This represents hundreds of hours of work. If you enjoy it, please 👍 like, 💬 comment, and feel free to ⚡ tip 😉


    Comments (38)

    GlowingGuardianGirl · Dec 10, 2025

    Define "LOW" 😅 how low are we speaking? 12gb? 10? 8?

    lonecatone23
    Author
    Dec 10, 2025

    Welp... You could technically run a long loop of 3-second videos, so pretty low. I ran it off my laptop with an RTX 5060, and it was cooking at a little over a minute per second of video at 480p. That's only 8 GB.

    jzaamir · Dec 11, 2025

    Thanks for sharing. Quick question: if I have 24GB VRAM, what settings do I need to change to make it go faster? RTX 5090. I usually get OOM using other workflows.

    lonecatone23
    Author
    Dec 11, 2025

    So generation issues can be resolved three ways. Unfortunately, none of them is ideal. Here's the best way with low VRAM.

    An RTX 5090 is 24GB VRAM, right? You have something set up wrong if you are getting OOM errors. That's hefty and should run 720p all day long.

    1. The single biggest driving factor is the size of the image. Run it at smaller sizes, then upscale with an upscaler like Topaz

    2. Shorten the video length to 3 seconds. You can easily do this here.

    3. Use GGUF models, but use the RIGHT model that's quantized for your computer. Ask ChatGPT which one to use. That will significantly help.


    Hopefully that helps
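    To see why tips 1 and 2 work, note that VRAM use scales roughly with width × height × frame count. Here is a back-of-envelope sketch of that scaling — the 8x spatial / 4x temporal compression, 16 latent channels, and fp16 storage are illustrative assumptions, not Wan's exact memory model, and the figure ignores model weights and activations:

    ```python
    def latent_mem_mb(width, height, seconds, fps=16, channels=16, bytes_per=2):
        """Rough fp16 latent-tensor size in MB. ASSUMPTIONS for illustration:
        8x spatial / 4x temporal VAE compression and 16 latent channels; real
        peak VRAM also includes model weights and attention activations."""
        frames = seconds * fps
        latent = (width // 8) * (height // 8) * (frames // 4 + 1) * channels * bytes_per
        return latent / (1024 ** 2)

    # Tip 1: halving both image dimensions cuts the latent footprint ~4x.
    print(round(latent_mem_mb(1280, 720, 5) / latent_mem_mb(640, 360, 5), 1))  # → 4.0
    # Tip 2: shortening 5s -> 3s shrinks it roughly in proportion.
    print(latent_mem_mb(640, 360, 3) < latent_mem_mb(640, 360, 5))  # → True
    ```

    The practical takeaway: dropping from 1280x720 to 640x360 buys far more headroom than any sampler setting, which is why generating small and upscaling afterwards is listed first.
    
    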

    wallmonster151 · Dec 11, 2025

    Not sure about your settings. If you have not done so, you could consider changing your CUDA Sysmem Fallback to offload some of the load to DRAM. I have 192 GB of DRAM and another 2x on a swap drive along with my 5090. After enabling those I have not encountered another OOM. It seemed to take a couple of restarts for the Sysmem setting to fully kick in, but I regularly see GPU memory usage at close to 50 GB without an error.

    jzaamir · Dec 12, 2025

    @lonecatone23 Thanks for the help! Will try all of these and see which one works. Cheers!

    jzaamir · Dec 12, 2025

    @wallmonster151 Honestly, this is too deep for my tiny head :D But I will ask ChatGPT to explain it further in detail with a follow-along guide. Thanks.

    lonecatone23
    Author
    Dec 12, 2025

    @jzaamir Bro, when I started, every other question was "how does this work?". It's a lot to take in. Just experiment. You'll find out it's a hell of a lot of fun.

    sdktertiaire2 · Dec 11, 2025

    Hello, thanks. Really good workflow.

    lonecatone23
    Author
    Dec 11, 2025

    Thanks!

    Dylfin · Dec 11, 2025

    Yeah, I made a similar workflow too. But I had the same problems as others: the model lost the initial appearance of the girl, and it's hard to force the model to follow the prompt.

    lonecatone23
    Author
    Dec 11, 2025

    I actually came up with a (partial) solution for that. Taggers help somewhat, but I redid it so that there is a space to put a character prompt. It solved it quite a bit.

    Dylfin · Dec 12, 2025

    @lonecatone23 I thought about 2 variants.

    The first is to process the last frame with image-to-image to keep the subject consistent.
    The second is to create a LoRA. For that, use the initial image to create several videos/images from different angles and use them to train the LoRA. But this requires a fairly powerful setup and a lot of time.
    Now I think it will probably be easier to create the LoRA images with text-to-image; we have the right workflows for that now, as far as I know.
    And we can use the old movie trick between short videos: a blank screen. In that case everything will be quite fast.

    lonecatone23
    Author
    Dec 12, 2025 · 1 reaction

    @Dylfin Yeah, I'd use other generators. I like this because it runs on my 8gb laptop. I can also faceswap.

    It's for low VRAM and does a pretty bang-up job for what it is.

    gumpbubba721291 · Dec 13, 2025 · 1 reaction

    Ok... so. I got obsessed with trying to solve this problem for a while. My conclusion was that a perfect solution doesn't exist for true I2V yet. Faceswapper tech for anything high resolution isn't open source anymore; the tech isn't there yet. HOWEVER, a great solution does exist for VACE, and you can sort of resolve this issue using VACE as an I2V style in a flow based around https://github.com/bbaudio-2025/ComfyUI-SuperUltimateVaceTools

    I never released the workflow I made with it, because I still need to clean it up and tighten up some params, but basically in the workflow with SuperUltimateVaceTools, you can set up the parameters to be a trade off of how much it references the original image per iteration vs the last frame of the iteration. In this method, if you have an I2V that is a repeated action with a similar setting throughout (i.e. a titfuck, since they're not changing background or walking around), you can create a setup with minimal loss. For instance, maybe you give an 80/20 blend. The nodes also allow image drops at specific points, so if you have an image you want it to go to at some point, you can specify the frame it should become that image.

    Something I want to also understand is some of the technical details behind I2V generation to see if it's possible to only have VAE decoding done once at the end, eliminating VAE decoding loss per iteration, but I'm still researching that and the coding required.

    When I tried making a workflow myself before I knew about the SuperUltimateVaceTools stuff, I had pretty good success as well with the standard iterative workflow (taking the last image of the previous generation as the first image of the new generation), but on the new image, I masked the subject out and replaced it with the subject of the original image. Then I created a transition between the previous iteration and the new iteration via a full grey mask. It created a seamless "infinite length" video without cooking the subject, only the background. That workflow was a nightmare to make. I recommend checking out SuperUltimateVaceTools; it makes life MUCH easier.
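    The anti-drift idea described above (e.g. an 80/20 blend between the last frame and the original reference) can be sketched in a few lines. This is a hypothetical illustration, not the SuperUltimateVaceTools API: frames are flat lists of 0-255 values here, while a real ComfyUI graph would do the same thing on image tensors with an image-blend node, and the 20% reference weight is a tunable assumption:

    ```python
    def blend_frames(reference, last_frame, ref_weight=0.2):
        """Per-pixel weighted blend: carry some of the ORIGINAL reference
        image into each iteration's start frame to fight identity drift.
        ref_weight=0.2 mirrors the 80/20 split mentioned above."""
        return [round(ref_weight * r + (1 - ref_weight) * f)
                for r, f in zip(reference, last_frame)]

    reference = [200, 200, 200]   # pixels from the original subject
    drifted   = [100, 100, 100]   # the same pixels after several iterations
    print(blend_frames(reference, drifted))  # → [120, 120, 120]
    ```

    Each iteration the conditioning frame is pulled part of the way back toward the reference, so small per-generation errors no longer compound into the "cooked subject" look.
    
    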

    lonecatone23
    Author
    Dec 13, 2025

    @gumpbubba721291 Bro, send me the workflow. I'm super interested in deconstructing it. I've never worked with VACE.

    gumpbubba721291 · Dec 13, 2025

    @lonecatone23 https://civitai.com/models/1913485?modelVersionId=2169439 This is the super convoluted monstrosity I made at one point lmao. For the simpler one using SuperUltimateVaceTools, I'll share it once I have it set up, but I would recommend checking out their example workflows https://github.com/bbaudio-2025/ComfyUI-SuperUltimateVaceTools/tree/main/workflows as they form a good baseline for it.

    lonecatone23
    Author
    Dec 13, 2025

    @gumpbubba721291 I don't care if it works or not, I already owe you for the intro video 🤣🤣

    lonecatone23
    Author
    Dec 13, 2025

    @gumpbubba721291 Have you messed with SAM3 yet? Try it. https://ai.meta.com/sam3/

    ChristianArc · Dec 12, 2025

    Very promising, thank you!!!
    The combined film stitches together all the 81-frame clips, but each clip's last frame is repeated as the next clip's first frame.
    How do I avoid this?
    The quality also falls off by the 4th generation for me.

    lonecatone23
    Author
    Dec 12, 2025

    That's weird. The seed generator is hooked up to a setnode called "Seed". It is on the right side of the User Input area in the yellow get/set node area (5th from the bottom), and in each generation module's get/set node area (4th from the bottom). Make sure the values in each of those nodes match. Sometimes they break when copying and pasting or uploading.

    MugenMan · Dec 21, 2025

    What can be edited to increase the speed for video cards with more VRAM?

    Other workflows run faster for me, but I was interested in this one in particular because of the visuals and because the video-duration extension actually works properly.

    lonecatone23
    Author
    Dec 21, 2025 · 1 reaction

    Turn off NAG or make sure that SageAttention is doing its thing. I've also found that the difference between checkpoints (I'm not talking fp8/fp16/GGUF, but different versions of the same model) massively affects what you generate. I can generate about a second of video per minute on an 8 GB card.

    icuzz · Jan 26, 2026

    I'm having trouble with this. Does this still work, or does it have problems with the latest Comfy? I tried with a GGUF model. It gives me errors, and I have every extension installed in Comfy.

    lonecatone23
    Author
    Jan 26, 2026

    I haven't gone through this one, but both Wan and ComfyUI have updated since I made it, so I wouldn't be surprised. Are you using Desktop or Portable, so I know what to look for?

    icuzz · Jan 26, 2026

    @lonecatone23 I'm using the desktop, comfy is the latest version.

    lonecatone23
    Author
    Jan 26, 2026

    @icuzz Yeah, desktop sucks. I ended up switching over. I'm not surprised.

    Try updating all from the manager

    icuzz · Jan 26, 2026

    @lonecatone23 Everything is updated. Anyway, if it works for you, then something on my end isn't working and maybe I can figure it out.

    lonecatone23
    Author
    Jan 26, 2026 · 1 reaction

    @icuzz It runs for me.

    icuzz · Jan 27, 2026

    @lonecatone23 Well, now it works, and I actually did not do anything :D Some nodes did get updates between my last try (yesterday) and today. Thanks for the help and the workflow :) Seems great.

    lonecatone23
    Author
    Jan 27, 2026

    @icuzz I haven't updated this in a while. I have a better version.

    Try this IF you have high VRAM (16 GB); otherwise it takes forever.

    https://civitai.com/models/2275970/wan-22-svi-seamless-transition-workflow-with-seed-vr2-upscaling-and-interpolation-works-on-low-vram

    icuzz · Jan 27, 2026

    @lonecatone23 I only have a 3080 with 10 gigs of VRAM, so I probably won't be using it :(

    tont56643 · Feb 10, 2026

    Some troubles here, perhaps because of Python 3.12 + PyTorch 2.10 + cu128? Triton seems to be bundled with cu128, and the workflow cannot find it.

    tont56643 · Feb 10, 2026

    And just answering myself: the error when trying to run the workflow is

    PatchSageAttentionKJ

    No module named 'triton'
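    That error means the SageAttention patch node can't import Triton, which SageAttention builds its kernels with. A minimal sketch for checking what's missing before installing anything — run it in ComfyUI's embedded Python, not your system Python. The package names are assumptions to verify for your setup (on Windows, the Triton wheel is commonly published as `triton-windows`, which still imports as `triton`):

    ```python
    import importlib.util

    def missing_packages(*names):
        """Return which of the given packages cannot be imported.
        'triton' and 'sageattention' are what a SageAttention patch
        node typically needs."""
        return [n for n in names if importlib.util.find_spec(n) is None]

    # Anything printed here needs a pip install in the same environment.
    print(missing_packages("triton", "sageattention"))
    ```

    Whatever the list prints is what to `pip install` into that same environment; an empty list means the imports should resolve and the node error likely lies elsewhere.
    
    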

    lonecatone23
    Author
    Feb 10, 2026

    @tont56643 You need to install SageAttention. You can find videos on how to do it. If you have issues, I can walk you through it later.

    tont56643 · Feb 11, 2026

    @lonecatone23 thanks for pointing me in the right direction, now it works - the first test already gave results.

    lonecatone23
    Author
    Feb 11, 2026

    @tont56643 Glad to hear it. I was at work, sorry I couldn't help.

    lonecatone23
    Author
    Feb 11, 2026 · 1 reaction

    @tont56643 I'm probably going to push out an update on this in the next week or so.

    Workflows
    Wan Video 2.2 I2V-A14B

    Details

    Downloads
    1,198
    Platform
    CivitAI
    Platform Status
    Available
    Created
    12/10/2025
    Updated
    5/15/2026
    Deleted
    -