
    Daxamur's Wan 2.2 Workflows

    If you'd like to support me, check out my Patreon!
    DM to inquire about custom projects.


    -NEWS-

    Responses are delayed as I'm heads down working on getting my next release ready for you all - once released, responses will go back to normal!

    v1.2.1 Out Now! - Update to DaxNodes via ComfyUI manager required

    • FLF2V added with GGUF support - no new models required

    • Fixed ability to independently disable / enable upscaling and interpolation

    • Dedicated resolution picker nodes; added the auto-resizing functionality from v1.3.1 to I2V and FLF2V


    DaxNodes now available via ComfyUI Manager, no more git clone required!


    Current Tracked Bugs:

    • KJNodes Get / Set nodes are reported as missing for some users - if this happens, ensure you download the latest version of DaxNodes from ComfyUI Manager and re-import the workflow! - In progress


    If you see a "FileNotFoundError ([WinError 2] The system cannot find the file specified.)" from VideoSave or other video-related nodes, FFmpeg is missing or not in your system PATH.

    • Setup (Full Version Required):

    • Download the full FFmpeg build.

    • Extract it to a stable location (e.g., C:\ffmpeg).

    • Add C:\ffmpeg\bin to your system PATH:

    • Open "Edit the system environment variables" -> "Environment Variables...".

    • Under System variables, select Path -> Edit....

    • Click New and add C:\ffmpeg\bin.

    • Save and exit.

    • Restart ComfyUI (and your terminal/command prompt).

    After this, everything should work!
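    If you want to confirm the PATH change took effect, here's a quick check from Python (standard library only; run it in the same environment ComfyUI uses):

        import shutil

        # Prints the resolved ffmpeg path if it is on PATH, or None if the
        # PATH change has not taken effect yet (restart your terminal first).
        print(shutil.which("ffmpeg"))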


    v1.3.1 Features

    Segment-Based Prompting

    • Persistent Positive Prompt: Keeps consistent details across the entire video (e.g., “A woman with green eyes and brown hair in her warmly lit bedroom”).

    • Segment Positive Prompts: Separated with +, one per segment (e.g., “She is writing in a journal + She closes the journal and stands up + She walks away”) - see the sketch after this list.

    • Gives you far more control in long-form videos and helps reduce WAN’s tendency to render weird camera movements or judders at the I2V start.
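    To make the segment splitting concrete, here's a minimal sketch of how a "+"-separated prompt maps to per-segment prompts (my own illustration - the actual DaxNodes parsing may differ):

        # Illustration only - the actual DaxNodes parsing may differ.
        persistent = "A woman with green eyes and brown hair in her warmly lit bedroom"
        segments_raw = ("She is writing in a journal + "
                        "She closes the journal and stands up + "
                        "She walks away")

        # One positive prompt per segment: persistent details plus that segment's action.
        prompts = [f"{persistent}. {s.strip()}" for s in segments_raw.split("+")]
        for i, p in enumerate(prompts, 1):
            print(f"Segment {i}: {p}")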

    Endless-Style Looping

    • Segments can chain "infinitely" (I capped the node at 9999), creating effectively endless loops.

    • The Video Execution ID manages overwrites and stitching - just increment the ID as you generate new sequences (see the sketch below).
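    As a rough illustration of what the execution ID does, segments from one run can be grouped under a single ID and stitched in order (hypothetical file layout, not the actual DaxNodes naming):

        from pathlib import Path

        # Hypothetical layout: per-execution temp segments keyed by Video Execution ID.
        execution_id = 1
        segment_dir = Path("ComfyUI/output/.tmp") / str(execution_id)
        segments = sorted(segment_dir.glob("*.mp4"))  # stitch in sorted order
        print(f"{len(segments)} segments queued for stitching (execution {execution_id})")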

    Streaming RIFE VFI + Upscaling

    • Tweaked RIFE VFI and upscaling now stream frames instead of holding entire sequences in VRAM/RAM (see the sketch after this list).

    • Allows much longer videos, smoother interpolation, and sharper upscales without OOM errors.
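    Conceptually, the streaming interpolation follows a generator pattern - frames are handed to the encoder one at a time instead of accumulating in memory (a simplified sketch, not the actual RIFE code):

        def stream_interpolated(frames, midpoint):
            """Yield original frames with one in-between frame per pair, so the
            consumer (the encoder) never holds the full sequence in memory."""
            prev = None
            for frame in frames:
                if prev is not None:
                    yield midpoint(prev, frame)  # interpolated in-between frame
                yield frame
                prev = frame

        # Toy example: numbers as "frames", averaging as the interpolator.
        print(list(stream_interpolated([0, 10, 20], lambda a, b: (a + b) / 2)))
        # -> [0, 5.0, 10, 15.0, 20]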

    Face Detection & Drift Correction

    • Intelligent Mediapipe face frame detection locks focus on characters.

    • Drift correction ensures the final video runs at least as long as requested - instead of cutting mid-generation, it adds full extra segments until the target frame count is met or exceeded (see the sketch after this list).

    • This way, no generated frames are wasted, and you always end up with smooth, complete segments.

    • Fully toggleable, with adjustable frame look-back settings.
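    The "whole segments only" rounding can be written out in a few lines (a sketch with assumed numbers - the segment length and look-back values are placeholders, not the node's actual defaults):

        import math

        target_frames = 241       # requested video length in frames
        frames_per_segment = 81   # assumed segment length
        look_back = 8             # assumed frames reused from the previous segment

        # Each segment after the first adds (frames_per_segment - look_back) new frames.
        new_per_segment = frames_per_segment - look_back
        extra = max(0, target_frames - frames_per_segment)
        segments = 1 + math.ceil(extra / new_per_segment)
        total = frames_per_segment + (segments - 1) * new_per_segment
        print(segments, total)  # 4 segments, 300 frames: >= target, no partial segments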

    Resolution Handling

    • T2V: Standard WAN resolution presets with optional overrides.

    • I2V: The input image is scaled to WAN-native resolutions, preserving aspect ratio; “Native” passthrough is supported (see the sketch below).
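    Here's a minimal sketch of aspect-preserving scaling toward a WAN-native target (the 720p target area and the multiple-of-16 snapping are my assumptions, not necessarily what the picker node does):

        def fit_to_area(w, h, target_area=1280 * 720, multiple=16):
            """Scale (w, h) to roughly target_area pixels, keeping aspect ratio
            and snapping both sides to a multiple the model accepts."""
            scale = (target_area / (w * h)) ** 0.5
            return (max(multiple, round(w * scale / multiple) * multiple),
                    max(multiple, round(h * scale / multiple) * multiple))

        print(fit_to_area(1024, 1024))  # -> (960, 960)
        print(fit_to_area(1920, 1080))  # -> (1280, 720)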

    QoL & Management

    • Toggle upscaling/interpolation independently.

    • Temp file output is organized by execution ID - clear /output/.tmp/ periodically to save space (see the cleanup sketch below).
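    For the periodic cleanup, something like this works (adjust the path to your install - this deletes everything under .tmp, so make sure no generation is running):

        import shutil
        from pathlib import Path

        tmp = Path("ComfyUI/output/.tmp")  # adjust to your ComfyUI location
        if tmp.exists():
            shutil.rmtree(tmp)  # removes all per-execution temp folders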

    Looking Ahead

    This workflow is still experimental; future versions will expand on segment control, smarter handling of motion/camera behavior, more adaptive face tracking, and even audio/video integration for cinematic sequences. Big things are coming!


    Notes

    I've done my best to place most nodes that you'd want to configure at the lower portion of the flow (roughly) sequentially, while most of the operational / backend stuff sits at the top. Nodes have been labeled according to their function as clearly as possible.

    Beyond that:

    • NAG Attention is in use, so it is recommended to leave the CFG set to 1.

    • The sampler and scheduler are set to uni_pc // simple by default, as I find this is the best balance of speed and quality. (v1.1 only) If you don't mind waiting (a lot, in my experience) longer for some slightly better results, then I'd recommend res_3s // bong_tangent from the RES4LYF custom node.

    • I have set the default number of steps to 8 (4 steps per sampler) as opposed to 4, as this is where I see the most significant quality / time tradeoff - but this is really up to your preference.

    • This flow will save finished videos to ComfyUI/output/WAN/<T2V|T2I|I2V>/ by default.

    I2V

    • The custom node flow2-wan-video conflicts with the WAN image-to-video node and must be removed for the workflow to function. I have found that this node does not get completely removed from the custom_nodes folder when uninstalled via the ComfyUI Manager, so it must be deleted manually.

    GGUF

    • All models used with the GGUF versions of the flows are the same, with the exception of the base high and low noise models. You will need to determine which GGUF quant best fits your system, then set the correct model in each respective Load WAN 2.2 GGUF node. As a rule of thumb, your GGUF model should ideally fit within your VRAM with a few GB to spare (see the sketch after this list).

    • The examples for the GGUF flows were created using the Q6_K quant of WAN 2.2 I2V and T2V.

    • The WAN 2.2 GGUF quants tested with this flow come from the following locations on Hugging Face:
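    As a quick way to apply the rule of thumb above, compare the GGUF file size against your VRAM (a sketch - the filename is hypothetical and the few-GB headroom is a rough allowance, not an exact figure):

        import os

        gguf_path = "ComfyUI/models/unet/wan2.2-i2v-high-noise-Q6_K.gguf"  # hypothetical name
        vram_gb = 16      # your GPU's VRAM
        headroom_gb = 3   # rough allowance for everything besides the base model

        size_gb = os.path.getsize(gguf_path) / 1024**3
        print("should fit" if size_gb + headroom_gb <= vram_gb else "try a smaller quant")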

    MMAUDIO

    • To set up MMAUDIO, you must download the MMAUDIO models below, create an "mmaudio" folder in your models directory (ComfyUI/models/mmaudio), and place every mmaudio model downloaded into this folder (even apple_DFN5B-CLIP-ViT-H-14-384_fp16.safetensors).
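    In other words, the expected layout can be created like this (paths taken from the note above):

        from pathlib import Path

        mmaudio_dir = Path("ComfyUI/models/mmaudio")
        mmaudio_dir.mkdir(parents=True, exist_ok=True)
        # Place every downloaded MMAUDIO model in here, including
        # apple_DFN5B-CLIP-ViT-H-14-384_fp16.safetensors.
        print(sorted(p.name for p in mmaudio_dir.iterdir()))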

    Block Swap Flows

    • Being discontinued, as I have found that native ComfyUI memory swapping conserves more memory and slows the process down less in my testing. If you hit OOM with the base v1.2 flows, I'd recommend trying out the GGUF versions!

    Triton and SageAttention Issues

    • The most frequent issues I see users encounter are related to the installation of Triton and SageAttention - and while I'm happy to help out as much as I can, I am but one man and can't always get to everyone in a reasonable time. Luckily, @CRAZYAI4U has pointed me to Stability Matrix which can auto-deploy ComfyUI and has a dedicated script for installing Triton and SageAttention.

    • You will first need to download Stability Matrix from their repository and download ComfyUI via their hub. Once ComfyUI has been deployed via the hub, click the three horizontal dots at the top left of the ComfyUI instance's entry, select "Package Commands", then "Install Triton and SageAttention". Once complete, you should be able to import the flow, install any missing dependencies via ComfyUI Manager, drop in your models, and start generating!

    • I'll spin up a dedicated article with screenshots on this soon.

    Models Used

    T2V (Text to Video)

    I2V (Image to Video)

    MMAUDIO

    Non-Native Custom_Nodes Used

    Description

    • Added easy Upscale and Interpolation bypassing.

    FAQ

    Comments (35)

    Seeker360 · Aug 12, 2025 · 2 reactions

    Fantastic workflow - very clean and precise, like all good workflows should be! The only issue I'm currently having is that my videos are generating with an extreme slow-motion on them - by the time they finish the interpolation stage, they're still running very slowly. Any ideas?

    Daxamur (Author) · Aug 12, 2025 · 1 reaction

    So, there are two main causes for this in these flows:

    1. The speedup lora itself can cause this - you can improve it by lowering the lightx2v lora strength on the high model, but this comes at the cost of some clarity in my experience (0.45 seems to be a good middle ground).

    2. I have the default FPS set to 16 (WAN 2.1's native framerate), as I find it often aligns better with the lightx2v lora (and marginally reduces generation time by nature of generating fewer frames). You can bump this up to 24 (WAN 2.2's native framerate) to increase the playback speed and overall frame count - for reference, 81 frames play for about 5.1 s at 16 FPS but about 3.4 s at 24 FPS.

    Seeker360 · Aug 12, 2025 · 1 reaction

    Thanks - that's really helpful! I've been using the workflow quite a bit and I have to say, it's brilliant!

    passhornet5266570 · Aug 12, 2025 · 4 reactions

    You should try the 2xLiveActionV1_Span upscaler. I'm getting similar results to ESRGAN but much faster.

    Daxamur (Author) · Aug 12, 2025

    Thanks for the tip, I'll definitely check it out!

    DK7 · Aug 13, 2025 · 1 reaction

    Care to explain more? Where can we get it? Can we use it in this workflow? Do you have a workflow of your own? If so, please share it.

    Daxamur (Author) · Aug 13, 2025

    DK7 It, and most other upscalers people mention, can typically be found here: https://openmodeldb.info/ - the one @passhornet5266570 mentioned is at this link: https://openmodeldb.info/models/2x-LiveActionV1-SPAN

    passhornet5266570 · Aug 13, 2025

    DK7 Daxamur has got you on the where. Just put it in the upscale_models folder (ComfyUI/models/upscale_models). Dax's workflow has ESRGAN by default - just change that to any other you'd like to try. Remacri and NMKD are a couple of others to try, too, but they are 4x upscalers.

    I'm just trying workflows from other creators like Dax since I'm fairly new to Wan.

    ramnnv · Aug 12, 2025 · 3 reactions

    The only thing I don't like about your workflow is that it takes a long time to regenerate a video if you modify the loras or their strength. Is there a way to fix this? It's very annoying, and it doesn't happen to me with other workflows. It takes almost the same time as when I start Comfy, load the models, and generate the first video. Thank you for your work and for sharing it.

    ramnnv · Aug 12, 2025 · 1 reaction

    I mean:

    When I generate the first video after opening ComfyUI, it takes ~400s.
    When I generate my second, third, etc. video, it takes ~60s.
    When I modify anything about the loras, it takes ~350s to generate the next video.

    This doesn't happen with other workflows.

    Daxamur (Author) · Aug 12, 2025

    ramnnv I'll take a look and see if I get the same behavior on my end real quick

    Daxamur (Author) · Aug 13, 2025

    ramnnv So, it doesn't look like I can replicate this in my environment - if you can post / DM your specs along with a pastebin link to an export of the flow with the loras you're using, I can take a look.

    mrazvanalex · Aug 13, 2025 · 1 reaction

    ramnnv I feel like this is because you are loading new models (the loras)?

    lug_L · Aug 13, 2025 · 3 reactions

    Hello, thanks for sharing your amazing workflow, but I have a problem — my RTX 3080 10GB can’t handle 720p. The VRAM usage goes up to 98% and it stops progressing 😢, so I have to set it to 528x960, which stays around 90%–96% VRAM.

    But here’s the strange part: I’ve noticed that with the 'scaled_KJ' model from Kijai, if you lower it from 720p (at least for me), the generated video looks blurry when there’s a lot of hand movement or similar actions. I don’t know why.

    Now, if I use another model, for example GGUF Q8_0, I can use it without seeing that blurriness, but I can’t achieve the same quality as your videos. How can I get that quality with a lower resolution?

    I mainly notice the issue in the eyes — in some frames, they turn blurry. Is it possible to get your level of quality with a low resolution like mine, or do I absolutely need 720p to achieve it?

    I’m leaving a sample video below so you can see how it looks with the GGUF Q8_0 model, because with scaled_KJ it looks very blurry at my resolution 😭.

    Daxamur (Author) · Aug 13, 2025 · 3 reactions

    The resolution is definitely a huge factor in the quality of the final video - it might be worth trying something closer to a 16:9 aspect ratio like 540x960 or 480x854, but I don't know if it will help much.

    I have had plans to drop a block swap version of this flow to help with the memory usage, which should fix your issues - I'd imagine I could have it ready here shortly

    lug_L · Aug 13, 2025 · 1 reaction

    Daxamur Thanks, I’ll try it when you publish it, although I’ve never been able to go above that resolution. I’ve tried in many workflows but couldn’t reach 720p. Anyway, I’ve already published a video — I like how it turned out, even though I’m not a fan of upscales, but I do like how it looks with your workflow. ❤️

    Daxamur (Author) · Aug 13, 2025 · 1 reaction

    lug_L Anytime! T2V + Block Swap is up - I was having some issues with block swapping while testing I2V, so that one will require some more work.

    lug_L · Aug 13, 2025

    Daxamur Thanks, there’s no rush. I’ll wait patiently for you to set it up properly. Do you think you could manage to use a higher resolution? I’ve tried with other workflows, but I’ve never been able to reach 720p.

    lug_L · Aug 13, 2025 · 1 reaction

    Daxamur Hi, look, this might interest you. They say that if you add 3 KSamplers, it removes the slow motion. You could try adding one more sampler to the workflow and see if there's more movement, like in this post: https://huggingface.co/lightx2v/Wan2.2-Lightning/discussions/26

    Daxamur (Author) · Aug 13, 2025 · 1 reaction

    lug_L Very nice, I'll definitely play around with it and see if I can get it consistent - good looks!

    lug_L · Aug 13, 2025 · 1 reaction

    Daxamur Thanks, my friend. I’ll be patient to see that next workflow—it’s surely going to be great with that 3 KSampler ❤️

    Daxamur (Author) · Aug 13, 2025 · 1 reaction

    lug_L Try out the new GGUF flow when you have the chance and let me know if that works for you - even lower potential memory usage than with block swap, depending on the GGUF model you go with!

    lug_L · Aug 13, 2025 · 1 reaction

    Daxamur  I tried using the Q4_K_S model and there’s no way to run 720p. The WanImageToVideo just keeps loading forever at 98% VRAM, which is when it crashes due to the high resolution. I’ve tried using that resolution in another workflow before, and I don’t think there’s any chance for my 3080 😭

    Daxamur (Author) · Aug 13, 2025

    lug_L Ahh rip, you should definitely be able to with a 3080 - as long as you've got some system RAM to spare. Could you pass me your console logs from when it fails?

    Daxamur (Author) · Aug 14, 2025 · 1 reaction

    lug_L Now that the MMAUDIO flow is up, this is my current WIP

    lug_L · Aug 22, 2025

    @Daxamur Sorry, I haven’t logged in for a few days. I’ll send you the memory error through a private message as it appears in the console. Sometimes it throws the error, other times it just gets super, super, super slow when trying to run at 720p resolution. What does happen is that in Crystools the memory goes up to 98%, and that’s when it doesn’t work at that resolution.

    RO4DHOG · Aug 13, 2025 · 2 reactions

    I had to delete the Convert FPS to FLOAT (HavocsCall) node for it to function - it said it was missing a 'string' input. The adjoining GET_FPS nodes on the Save_Video nodes were locked at 8 FPS, and it all seemed to work after that. I could also delete the orphaned Framerate node connections to allow modification of the SaveVideo framerate.

    Daxamur (Author) · Aug 13, 2025

    It sounds like there is a custom node conflict with the Get / Set or literal nodes in this flow, which come from the KJNodes and ComfyLiterals custom nodes. The HavocsCall node comes from a separate custom node suite, which I can confirm does give me issues if I try to install it and use this flow - if you're still having issues, I'd recommend removing HavocsCall in favor of KJNodes and ComfyLiterals.

    RO4DHOG · Aug 15, 2025

    Daxamur Thanks, it was partially because of my ComfyUI portable version's security, which required me to manually copy the ComfyLiterals directory into my Extensions subdirectory. I just needed to read the console, which indicated the needed instructions.

    mrazvanalex · Aug 13, 2025 · 3 reactions

    First of all, thanks for sharing this workflow! Looks great!

    It's really well organized but I keep manually expanding all the collapsed nodes to find how the image is used in the i2v workflow.

    It seems my image is not used as the start image; it follows the prompt instead, and the initial image seems completely ignored.

    Any idea why this might happen?

    Daxamur (Author) · Aug 13, 2025 · 1 reaction

    That's an interesting one - if you can share a pastebin dump of the flow with all the settings you're receiving an error on, as well as your custom node list, I can definitely take a look!

    mrazvanalex · Aug 13, 2025 · 1 reaction

    Daxamur Sorry... that was just me being dense. Hours later I noticed I was using the T2V instead of the I2V model... With so many models in the same folder, I barely ever get to input the right ones.

    Thanks for the answer. Will post here when I get my final results.

    Daxamur (Author) · Aug 13, 2025 · 1 reaction

    mrazvanalex Solid, no worries - glad to hear it wasn't anything crazy!

    DK7 · Aug 13, 2025 · 2 reactions

    I set my image in "initial image" but the output is some random video - it doesn't use it. Do I have to check something? Also, why does it take AGES to load the WanTEModel and WAN21 models every single time you want to generate a video? I mean, that defeats the whole purpose of generating fast...

    Daxamur (Author) · Aug 13, 2025 · 1 reaction

    It sounds like you may be hitting the same issue as @mrazvanalex; I would triple check that you are using the I2V models. In regard to the slow loading, I would imagine this comes down to ComfyUI's memory management: if both the High and Low noise models, along with their loras, can't be stored across your VRAM and system RAM at the same time, they will be unloaded and reloaded every execution. If you share your specs I'm happy to confirm this!

    Workflows
    Wan Video 2.2 T2V-A14B

    Details

    Downloads: 193
    Platform: CivitAI
    Platform Status: Available
    Created: 8/12/2025
    Updated: 5/13/2026
    Deleted: -

    Files

    daxamursWAN22WorkflowsV121FLF2VT2V_OldT2VV11.zip