CivArchive
    WAN2.2 14B - Unlimited Long Video Generation Loop - v1.0

    Unleash the full potential of the WAN2.2-I2V-A14B model. This isn't just a simple image-to-video converter; it's a professional-grade, automated studio designed to produce cinema-quality animations through an intelligent feedback loop. By leveraging the sheer power of the 14B parameter model, this workflow delivers unparalleled detail, motion consistency, and generation stability.

    🎬 A New Standard in AI Video Generation:

    • Dual-Stage Denoising Process: The secret to its stunning quality. This workflow employs a sophisticated two-model approach:

      • Stage 1 (High Noise): The Wan2.2-I2V-A14B-HighNoise model, empowered by a high-strength LoRA, acts as the creative engine. It establishes the core motion, composition, and dynamic elements of the scene.

      • Stage 2 (Low Noise): The Wan2.2-I2V-A14B-LowNoise model, with a refined LoRA, takes the initial output and enhances it. This stage cleans up artifacts, sharpens details, and ensures temporal stability, resulting in a polished, professional finish.

    • Precision Sampler Control: Utilizes KSamplerAdvanced nodes to give you exacting control over each denoising stage. Fine-tune the number of steps and sampling parameters for both the high-noise creative phase and the low-noise refinement phase independently.
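    The two-pass split described above can be sketched in a few lines. The parameter names follow ComfyUI's KSamplerAdvanced node (start_at_step, end_at_step, add_noise, return_with_leftover_noise); the 4-step schedule and the midpoint boundary are assumptions based on the 4-step Lightning LoRAs, not values read from the workflow file:

```python
# Sketch of a two-pass KSamplerAdvanced step split. The high-noise pass
# adds noise and hands leftover noise to the low-noise pass, which
# finishes the schedule. Boundary value is an assumption (2 of 4 steps).

def split_steps(total_steps: int, boundary: int):
    """Return settings for the high- and low-noise sampler passes."""
    if not 0 < boundary < total_steps:
        raise ValueError("boundary must fall inside the schedule")
    high_noise = {"start_at_step": 0, "end_at_step": boundary,
                  "add_noise": True, "return_with_leftover_noise": True}
    low_noise = {"start_at_step": boundary, "end_at_step": total_steps,
                 "add_noise": False, "return_with_leftover_noise": False}
    return high_noise, low_noise

high, low = split_steps(total_steps=4, boundary=2)
```

    Tuning the boundary shifts work between the creative (high-noise) and refinement (low-noise) stages.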

    • AI-Powered Narrative Continuity: An integrated Ollama vision model (e.g., Qwen2.5-VL) analyzes the last frame of each generated clip. It then dynamically generates a new, context-aware prompt that logically continues the action, creating a seamless and evolving story across multiple generations.
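    The continuation step boils down to one vision-model call per loop. Below is a sketch of the request body such a call would use; the endpoint and JSON fields follow Ollama's /api/generate API, but the instruction text is an illustrative placeholder, not the workflow's actual system prompt:

```python
import base64
import json

# Build the JSON body for an Ollama vision call that turns the last
# frame of a clip into a continuation prompt. Field names follow the
# /api/generate schema; the prompt text is a placeholder.

def build_continuation_request(frame_png: bytes, model: str = "qwen2.5vl:7b") -> dict:
    return {
        "model": model,
        "prompt": ("Describe the motion in this frame, then write a short "
                   "video prompt that continues the action naturally."),
        "images": [base64.b64encode(frame_png).decode("ascii")],
        "stream": False,
    }

req = build_continuation_request(b"\x89PNG placeholder bytes")
body = json.dumps(req)  # POST this to http://localhost:11434/api/generate
```

    The response's text becomes the positive prompt for the next segment.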

    • Cinematic Output Ready: The workflow doesn't just stop at generation. It includes RIFE VFI frame interpolation, boosting the final output to a buttery-smooth 32 FPS for a truly professional viewing experience. Intermediate previews are also saved for quick checks.
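    The frame arithmetic behind the 32 FPS output: RIFE inserts frames between every consecutive pair, so a 2x multiplier turns n frames into (n - 1) * 2 + 1 frames over the same duration. The 16 fps native rate below is an assumption (WAN models commonly render at 16 fps); the description only states the 32 fps target:

```python
# RIFE frame-count arithmetic for a given interpolation multiplier.
# 16 fps native rate is an assumption; 33 frames/iteration is from
# the description above.

def interpolated_frames(n_frames: int, multiplier: int) -> int:
    return (n_frames - 1) * multiplier + 1

frames_in = 33                                  # per loop iteration
frames_out = interpolated_frames(frames_in, 2)  # 65 frames
duration_s = (frames_in - 1) / 16               # ~2.0 s at 16 fps
fps_out = (frames_out - 1) / duration_s         # 32 fps
```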

    ⚙️ Technical Mastery:

    • Core Models: Wan2.2-I2V-A14B-HighNoise-Q5_0.gguf & Wan2.2-I2V-A14B-LowNoise-Q5_0.gguf

    • Specialized LoRAs: Wan2.2-Lightning_I2V-A14B-4steps-lora_HIGH_fp16.safetensors (Stage 1) & Wan2.2-Lightning_I2V-A14B-4steps-lora_LOW_fp16.safetensors (Stage 2)

    • Vision Encoder: clip_vision_h.safetensors (Essential for the 14B model's advanced understanding)

    • VAE: wan_2.1_vae.safetensors

    • Generation: Produces 33 frames of high-quality video per loop iteration.

    🔄 How It Works:

    1. Input & Analysis: Your starting image is prepared. Ollama analyzes it to create a dynamic motion prompt.

    2. Video Encoding: The WanImageToVideo node encodes the image and prompts into the model's latent space.

    3. Dual-Model Generation: The encoded data undergoes a two-pass rendering process for maximum fidelity.

    4. Decoding & Loop: The result is decoded into a video clip. The last frame is extracted, color-matched for consistency, and fed back into the loop as the new input image.

    5. Final Assembly: All segments are combined and interpolated into a final, seamless long-form video.
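    The five steps above form a simple feedback loop, sketched below as a driver. Every helper here is a stub standing in for a group of ComfyUI nodes; the function names are ours, not actual node names, and real frames would be tensors rather than strings:

```python
# Driver-loop sketch of the workflow's feedback structure.
# Each stub stands in for a group of ComfyUI nodes.

def analyze(frame):
    # Stub for the Ollama vision call -> motion prompt
    return f"continue motion from {frame}"

def generate_clip(frame, prompt, n_frames=33):
    # Stub for encode -> high-noise pass -> low-noise pass -> decode
    return [f"{frame}:{i}" for i in range(n_frames)]

def color_match(frame, reference):
    # Stub: keeps the palette consistent across segments
    return frame

def run_loop(start_image, iterations):
    segments, frame = [], start_image
    for _ in range(iterations):
        clip = generate_clip(frame, analyze(frame))
        segments.append(clip)
        frame = color_match(clip[-1], start_image)  # feed last frame back
    return [f for clip in segments for f in clip]   # final assembly

video = run_loop("start.png", iterations=3)  # 3 * 33 = 99 frames
```

    In the real workflow the assembled frames then go through RIFE interpolation before the final save.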

    🎯 Designed For:

    • Quality Pioneers: Users who demand the highest possible video quality from current AI models.

    • Technical Enthusiasts: Those who appreciate and have the hardware to leverage advanced, multi-stage generative pipelines.

    • Content Creators: Professionals and hobbyists looking for a reliable tool to produce stunning, long-form animated content.

    • Storytellers: Anyone who wants to create evolving narratives and scenes with perfect continuity.

    ⚠️ Important Requirements:

    • High-End Hardware: This workflow is designed for systems with substantial VRAM and RAM to handle the 14B models efficiently.

    • ComfyUI Environment: Requires custom nodes: ComfyUI-Easy-Use, Video-Helper-Suite, ComfyUI-Ollama, and ComfyUI-Frame-Interpolation.

    • Ollama Server: Must be installed and running with a capable vision model like qwen2.5-vl:7b.

    This workflow represents the top tier of what is currently achievable with ComfyUI and the WAN2.2 architecture. It is a testament to the power of combining massive models with intelligent, structured pipelines.

    Download now and start generating unparalleled AI video narratives.

    Comments (24)

    luk085154 · Aug 25, 2025
    CivitAI

    How do I write my own prompt to be enhanced by Ollama?

    zardozai
    Author
    Aug 25, 2025 · 1 reaction

    I'll make a T2V version and upload it ASAP.

    DontMindMeLove · Aug 26, 2025 · 2 reactions

    Use Ollama to generate a system prompt tweaked to your own desires. If you use an uncensored LLM like mistral.small, then you can tweak it to generate prompts that may be inappropriate 😉

    zardozai
    Author
    Aug 26, 2025

    Yes, you can select "keep context" in the Ollama node and edit the prompt directly for a more personalized storyline.

    DontMindMeLove · Aug 26, 2025
    CivitAI

    The problem I see with this approach is that there is no flowing story. Each pass through Ollama just gets an image description and tries to make something relatively coherent. Perhaps using context or by inputting a storyboard would work. Will need a much more sophisticated system prompt I suspect.

    DontMindMeLove · Aug 26, 2025 · 1 reaction

    @zardozai You need to remember to reset context between runs, or better, to include a reset at the start of the workflow. I think this is where the next level of vid gen will be headed. It gets very complicated very quickly when trying to generate a coherent storyline over multiple segments. Coming up with a generic system prompt that handles whatever you throw at it well would likely be impossible. Yet another rabbit hole to jump down.
    Your workflow is a great starting point, as it is easy to follow the loop's flow, and you just have to concentrate on the system prompt to get some really great videos out of it. I am using two Ollama nodes: Joytag for I2T, then mistral.small in the primary Ollama node. That gives me an uncensored flow with great I2V and flexible prompt gen.

    vladulidlo · Aug 26, 2025 · 5 reactions
    CivitAI

    For those who don't read all the text and just eye-scan it: make sure to download the qwen2.5vl:7b model and not qwen2.5:7b. Otherwise you will get a random prompt generator which does not correspond to the image :-)

    mag4Black · Aug 27, 2025 · 4 reactions
    CivitAI

    For clearing VRAM after Ollama, set "Keep Alive" to 0 minutes. It will flush the Ollama model from VRAM before loading WAN.
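    The "Keep Alive" tip above maps to Ollama's keep_alive setting, which the API also accepts per request; a value of 0 unloads the model immediately after the response. Field names below follow Ollama's /api/generate schema, and the prompt is a placeholder:

```python
# Per-request keep_alive example for Ollama's /api/generate API.
# keep_alive: 0 unloads the model from VRAM right after the response,
# freeing room for the WAN models. Prompt text is a placeholder.

payload = {
    "model": "qwen2.5vl:7b",
    "prompt": "Describe this frame.",
    "keep_alive": 0,
    "stream": False,
}
```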

    beyondpd307 · Aug 29, 2025
    CivitAI

    The Ollama network is not working. Is there a way to download the models and use them offline?

    zardozai
    Author
    Aug 29, 2025

    Ollama runs offline; you need to install it and point the workflow to your Ollama server.

    beyondpd307 · Aug 29, 2025

    OK, I got it.

    zardozai
    Author
    Aug 29, 2025

    @beyondpd307 Don't forget to install a vision model on your Ollama server. I recommend Qwen2.5-VL 7B.

    DarkArtsAI · Sep 4, 2025 · 1 reaction

    @zardozai Hiiiii, so like, I don't get the ollama part, I have ollama installed, I have the qwen 2.5 model, but I can't for the life of me figure out how it talks to Comfy, appreciate the assistance :3

    UPDATE: I no longer need help with ollama, I decided to yeet it out and experiment. Now I have another issue though: does anyone know how I can make it so I can define multiple prompts, one for every extension loop? Cheers! :3

    UPDATE 2: I figured out what to replace the Ollama part of the workflow with. There's a node set called CreaPrompt Multi Prompts, which gives you a multi-line positive prompt window where every line is the prompt for the next loop segment, giving you full control of the story in case you don't wanna use Ollama like I ended up deciding. Man, ComfyUI is fun! Okay, thanks for the workflow and have fun everyone! :3

    Sara_and_Hannah · Sep 8, 2025

    @DarkArtsAI Hi DarkArts, any chance you can show that Ollama swap you made? A screenshot will do. Feel like I'm trying to fix too many things at the same time and incorporate the CreaPrompt...

    DarkArtsAI · Sep 5, 2025 · 5 reactions
    CivitAI

    So in case there's someone else like me who decided that Ollama is not the right tool for the job: I figured out what to replace the Ollama part of the workflow with. There's a node set called CreaPrompt Multi Prompts, which gives you a multi-line positive prompt window where every line is the prompt for the next loop segment, giving you full control of the story in case you don't wanna use Ollama like I ended up deciding. Man, ComfyUI is fun! Okay, thanks for the workflow and have fun everyone! :3 If y'all want the modified version of the workflow, lemme know, I'll post it, with all credit to @zardozai of course :3

    zerocool22 · Sep 10, 2025 · 1 reaction

    Yes, post it. :)

    commuting183 · Sep 11, 2025

    Interested as well, please post it!

    DarkArtsAI · Sep 12, 2025

    @zerocool22 @commuting183 I hear you friends! :3 Lemme tidy it up a bit so it's presentable and I'll post it :3

    repsycle471 · Sep 10, 2025 · 2 reactions
    CivitAI

    Instead of using an AI, I would just grab the last frame, refresh the cache, and start again. Stitching is easy.

    dxjaymz · Sep 18, 2025 · 2 reactions
    CivitAI

    I just wanna add that if you want to do NSFW prompts, you should go with the Qwen2.5-VL-7B-Instruct-abliterated-GGUF.

    Jdoe666 · Oct 3, 2025

    OK, but where does it go? What folder?

    dxjaymz · Oct 4, 2025

    @Jdoe666 ComfyUI\models\llm_gguf (I don't remember who created this folder; maybe the LLM node)

    Workflows
    Wan Video 2.2 I2V-A14B

    Details

    Downloads
    1,202
    Platform
    CivitAI
    Platform Status
    Available
    Created
    8/25/2025
    Updated
    5/13/2026
    Deleted
    -

    Files

    wan2214BUnlimitedLong_v10.zip

    Mirrors