CivArchive
    WAN2.2 14B - Unlimited Long Video Generation Loop - v2.0_FIXED

    Unleash the full potential of the WAN2.2-I2V-A14B model. This isn't just a simple image-to-video converter; it's a professional-grade, automated studio designed to produce cinema-quality animations through an intelligent feedback loop. By leveraging the sheer power of the 14B parameter model, this workflow delivers unparalleled detail, motion consistency, and generation stability.

    🎬 A New Standard in AI Video Generation:

    • Dual-Stage Denoising Process: The secret to its stunning quality. This workflow employs a sophisticated two-model approach:

      • Stage 1 (High Noise): The Wan2.2-I2V-A14B-HighNoise model, empowered by a high-strength LoRA, acts as the creative engine. It establishes the core motion, composition, and dynamic elements of the scene.

      • Stage 2 (Low Noise): The Wan2.2-I2V-A14B-LowNoise model, with a refined LoRA, takes the initial output and enhances it. This stage cleans up artifacts, sharpens details, and ensures temporal stability, resulting in a polished, professional finish.

    • Precision Sampler Control: Utilizes KSamplerAdvanced nodes to give you exacting control over each denoising stage. Fine-tune the number of steps and sampling parameters for both the high-noise creative phase and the low-noise refinement phase independently.

    • AI-Powered Narrative Continuity: An integrated Ollama vision model (e.g., Qwen2.5-VL) analyzes the last frame of each generated clip. It then dynamically generates a new, context-aware prompt that logically continues the action, creating a seamless and evolving story across multiple generations.

    • Cinematic Output Ready: The workflow doesn't just stop at generation. It includes RIFE VFI frame interpolation, boosting the final output to a buttery-smooth 32 FPS for a truly professional viewing experience. Intermediate previews are also saved for quick checks.
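The dual-stage handoff described above maps onto a pair of KSamplerAdvanced nodes splitting a single step schedule. Below is a minimal sketch of that split; the dict keys mirror KSamplerAdvanced inputs, but the 4+4 split is an assumption inferred from the 4-step Lightning LoRAs, not a value read from the workflow file:

```python
# Hypothetical sketch of the two-stage step split used by paired
# KSamplerAdvanced nodes (step counts illustrative, not from the workflow).

def split_steps(total_steps: int, boundary: int):
    """Return (high_noise, low_noise) KSamplerAdvanced settings.

    Stage 1 adds noise and stops early, returning leftover noise;
    Stage 2 resumes at the boundary step without re-noising.
    """
    high = {
        "add_noise": True,
        "start_at_step": 0,
        "end_at_step": boundary,
        "return_with_leftover_noise": True,
    }
    low = {
        "add_noise": False,
        "start_at_step": boundary,
        "end_at_step": total_steps,
        "return_with_leftover_noise": False,
    }
    return high, low

# e.g. 8 total steps with the Lightning 4-step LoRAs: 4 high + 4 low
high, low = split_steps(8, 4)
```

Stage 1 leaves leftover noise at the boundary so Stage 2 can continue denoising from the same step index instead of re-noising the latent.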

    ⚙️ Technical Mastery:

    • Core Models: Wan2.2-I2V-A14B-HighNoise-Q5_0.gguf & Wan2.2-I2V-A14B-LowNoise-Q5_0.gguf

    • Specialized LoRAs: Wan2.2-Lightning_I2V-A14B-4steps-lora_HIGH_fp16.safetensors (Stage 1) & Wan2.2-Lightning_I2V-A14B-4steps-lora_LOW_fp16.safetensors (Stage 2)

    • Vision Encoder: clip_vision_h.safetensors (Essential for the 14B model's advanced understanding)

    • VAE: wan_2.1_vae.safetensors

    • Generation: Produces 33 frames of high-quality video per loop iteration.
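As a back-of-the-envelope check on clip length and runtime: Wan expects a 4n+1 frame count (33 = 4×8+1), and since each clip's last frame seeds the next iteration, only one frame overlaps between segments. The 16 FPS base rate before interpolation and the single-frame overlap are assumptions, not values stated by the workflow:

```python
def loop_frame_count(iterations: int, frames_per_clip: int = 33) -> int:
    """Unique frames when each clip's last frame seeds the next iteration."""
    assert frames_per_clip % 4 == 1, "Wan expects a 4n+1 frame count"
    return iterations * (frames_per_clip - 1) + 1

def duration_seconds(frames: int, fps: float) -> float:
    return frames / fps

total = loop_frame_count(4)   # 129 unique frames from 4 loop iterations
doubled = 2 * total - 1       # 2x RIFE inserts one frame between each pair
seconds_at_32fps = duration_seconds(doubled, 32.0)
```

Four iterations of the default 33-frame clip thus yield 129 unique frames, roughly eight seconds of footage after 2x interpolation at 32 FPS.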

    🔄 How It Works:

    1. Input & Analysis: Your starting image is prepared. Ollama analyzes it to create a dynamic motion prompt.

    2. Video Encoding: The WanImageToVideo node encodes the image and prompts into the model's latent space.

    3. Dual-Model Generation: The encoded data undergoes a two-pass rendering process for maximum fidelity.

    4. Decoding & Loop: The result is decoded into a video clip. The last frame is extracted, color-matched for consistency, and fed back into the loop as the new input image.

    5. Final Assembly: All segments are combined and interpolated into a final, seamless long-form video.
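The color matching in step 4 can be approximated with a per-channel mean/std transfer. This is a generic NumPy sketch of the idea, not the workflow's actual color-match node:

```python
import numpy as np

def color_match(frame: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Per-channel mean/std transfer: shift `frame` toward `reference`'s
    color statistics to limit drift across loop iterations."""
    out = frame.astype(np.float64)
    ref = reference.astype(np.float64)
    for c in range(out.shape[-1]):
        f, r = out[..., c], ref[..., c]
        std = f.std() or 1.0  # guard against a flat channel
        out[..., c] = (f - f.mean()) / std * r.std() + r.mean()
    return np.clip(out, 0, 255).astype(np.uint8)

# the last frame of the previous clip becomes the next loop's input image
clip = np.random.randint(0, 256, (33, 64, 64, 3), dtype=np.uint8)
next_input = color_match(clip[-1], clip[0])
```

Anchoring each new segment's statistics to the first frame is one way to counter the gradual contrast drift that accumulates over many iterations.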

    🎯 Designed For:

    • Quality Pioneers: Users who demand the highest possible video quality from current AI models.

    • Technical Enthusiasts: Those who appreciate and have the hardware to leverage advanced, multi-stage generative pipelines.

    • Content Creators: Professionals and hobbyists looking for a reliable tool to produce stunning, long-form animated content.

    • Storytellers: Anyone who wants to create evolving narratives and scenes with perfect continuity.

    ⚠️ Important Requirements:

    • High-End Hardware: This workflow is designed for systems with substantial VRAM and RAM to handle the 14B models efficiently.

    • ComfyUI Environment: Requires custom nodes: ComfyUI-Easy-Use, Video-Helper-Suite, ComfyUI-Ollama, and ComfyUI-Frame-Interpolation.

    • Ollama Server: Must be installed and running with a capable vision model like qwen2.5-vl:7b.
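A quick way to verify the Ollama side is wired up is to build the same kind of request the workflow sends. This sketch only constructs the JSON body for Ollama's /api/generate endpoint; the prompt text here is illustrative, and the workflow's own instruction prompt will differ:

```python
import base64
import json

def build_ollama_request(image_bytes: bytes, model: str = "qwen2.5-vl:7b") -> str:
    """JSON body for Ollama's /api/generate: the clip's last frame is sent
    as a base64 image so the vision model can propose the next prompt."""
    body = {
        "model": model,
        # Illustrative instruction, not the workflow's actual prompt:
        "prompt": "Describe the motion in this frame and continue the action.",
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,
    }
    return json.dumps(body)

payload = build_ollama_request(b"\x89PNG...")  # placeholder image bytes
```

POSTing this body to http://localhost:11434/api/generate should return a completion; an immediate failure there usually means the server isn't running or the vision model isn't pulled.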

    This workflow represents the top tier of what is currently achievable with ComfyUI and the WAN2.2 architecture. It is a testament to the power of combining massive models with intelligent, structured pipelines.

    Download now and start generating unparalleled AI video narratives.

    Description

FIXED VERSION OF V2.0

    FAQ

    Comments (26)

TwiiiistASaintTropez769 · Oct 3, 2025
    CivitAI

Love the funny output, but in most cases it quickly loses character consistency. Maybe try this with the Wan Animate version, including an option for a character reference, so it keeps the character?

szemeteskuka82421 · Oct 4, 2025
    CivitAI

    I would like to ask for a little help. I have the same problem in both versions. I have all the models for the workflow. Ollama is installed, I am logged in, and it is running. All nodes are installed and up to date. The GPU specifications are fine (RTX5090). However, a red circle appears next to the model in the LoraLoaderModelOnly Fusion X Lora Loader node. Error message:

    got prompt

    Failed to validate prompt for output 281:

    * LoraLoaderModelOnly 508:

    - Return type mismatch between linked nodes: model, received_type(WANVIDEOMODEL) mismatch input_type(MODEL)

    Output will be ignored

    Failed to validate prompt for output 433:

    Output will be ignored

    Failed to validate prompt for output 459:

    Output will be ignored

    Failed to validate prompt for output 424:

    Output will be ignored

    Failed to validate prompt for output 410:

    Output will be ignored

    Failed to validate prompt for output 440:

    Output will be ignored

    Failed to validate prompt for output 450:

    Output will be ignored

    Failed to validate prompt for output 411:

    Output will be ignored

    Failed to validate prompt for output 79:

    Output will be ignored

    Failed to validate prompt for output 439:

    Output will be ignored

    Failed to validate prompt for output 197:

    Output will be ignored

    Failed to validate prompt for output 505:

    Output will be ignored

    Prompt executed in 0.04 seconds

The only thing I understand is "mismatch", so probably one or more of the models you loaded aren't compatible. Check the VAE and check the umt5-xxl version. I got a similar error; I had just mixed models, and at one point it started working. In my case I think it was the umt5-xxl model that was giving the error (maybe don't use the scaled fp8, or do use it; it might be one or the other). I ended up using a GGUF version, Q6.

Another error I just got: for some reason one of the "boc" nodes refused to take any sampler name and instead said "NaN", which of course makes everything stop. In this case I solved it by going inside the compound node and unlinking the sampler name from the KSamplers, so I could manually pick them from the list (I ended up using euler/simple, but UniPC/beta also works).

CyclopsGER · Oct 4, 2025 · 3 reactions
    CivitAI

Hi, not sure what I am doing wrong. I loaded a custom image (a running man) and edited the prompt (a man is running).
I always get a totally different image, with a brunette woman holding a cup or something.
My image and prompt are ignored.

lemon95212 · Oct 4, 2025
    CivitAI

Error

    Loop Detected 176,174,

angelolinnnn · Oct 6, 2025

Same error, need help.

    zardozai
    Author
    Oct 10, 2025

    Because you need to start ComfyUI without the "--cache-none" parameter

aiuserstevy964 · Oct 5, 2025 · 1 reaction
    CivitAI

Awesome workflow, fun to play with, thanks a lot. But I agree, consistency is a bit of an issue. I don't know if it's possible, but I think injecting your own pre-made first and last frames for each scene would be awesome. Is that possible to build somehow?

    zardozai
    Author
    Oct 20, 2025

    Yes, I am currently working on a workflow that aligns perfectly with your suggestion.

sdktertiaire2 · Oct 5, 2025 · 1 reaction
    CivitAI

Hello, thank you. You are amazing! That's great and fantastic. I encountered a bug with GGUF, just for cow-um5txxx-q8_0.gguf:

custom_nodes\gguf\pig.py", line 336, in load_gguf_sd

raise ValueError(f"Unknown architecture: {arch_str!r}")

ValueError: Unknown architecture: 'cow'

    This was resolved simply by updating the custom node.

    For consistency, the solution is to use cartoon characters. The result is fun and consistent.

    Goodbye.
    SDK

ScriptHunter · Oct 9, 2025 · 2 reactions
    CivitAI

    Hi, you have a good workflow, but why, after the first segment (81 frames), do the second, third, and so on start to become increasingly contrasty? For example, with four segments (20 seconds of interpolated video), the final image differs greatly from the first in contrast; gray areas in the image become black, etc.

    zardozai
    Author
    Oct 20, 2025

    For improved results, select a higher initial resolution. This parameter directly contributes to enhanced similarity as well as consistency across iterative generation steps.

leonard4701 · Oct 13, 2025 · 5 reactions
    CivitAI

How do you actually run it? I just get 1 frame, no matter what loop_count I set for the full video or the length under "CLIP / VAE / SETTINGS".

Eduardo100 · Oct 31, 2025

    I also only get one image after 0.01 seconds.

alepodj · Nov 7, 2025

+1, I only get 1 image.

ComfyNSFW · Dec 2, 2025

Me too.

AIdundeeProd · Dec 12, 2025

Is the Ollama server running?

skpManiac · Jan 25, 2026

    yup, same here

devilscrypto901 · Nov 26, 2025
    CivitAI

Would it also be possible to get a non-GGUF workflow?

tmoazzam742 · Dec 31, 2025

CAN WE CHAT REGARDING THIS WORKFLOW?

tmoazzam742 · Dec 31, 2025 · 1 reaction
    CivitAI

IS THERE A VISUAL TUTORIAL FOR THIS WORKFLOW FOR LONG VIDEO GENERATION? IF YES, PLZ REFER ME.

Nazuna_Vampi · Feb 9, 2026 · 1 reaction
    CivitAI

    >High-End Hardware: This workflow is designed for systems with substantial VRAM and RAM
>It's a GGUF-based workflow...

    Lol, ay lmao even.

    zardozai
    Author
Feb 10, 2026 · 1 reaction

    GGUF Q8_0 offers higher precision than FP8. It's nearly identical to FP16 (99.99%) but with the smaller size of FP8.

FP8 is closer to Q4_0. This discrepancy is expected and often arises from a misunderstanding of the technology.

    COPE HARDER !

    yukilengao748Mar 10, 2026
    CivitAI

Can an RTX 5060 Ti 16GB run this?

    Workflows
    Wan Video 2.2 I2V-A14B

    Details

    Downloads
    3,966
    Platform
    CivitAI
    Platform Status
    Available
    Created
    10/4/2025
    Updated
    5/13/2026
    Deleted
    -

    Files

    wan2214BUnlimitedLong_v20FIXED.zip

    Mirrors