CivArchive
    WAN 2.2 - T2V - LightX - 4 Steps - UltraFast - ULTIMATE Workflow - AUTO PROMPT - LLM Agent - Full Local - HD - 16 VRAM - 64 RAM - Native + LLM v2.1
    NSFW

    WAN 2.2 – T2V – LightX – 4 Steps – UltraFast – ULTIMATE Workflow – AUTO PROMPT – LLM Agent – Fully Local – HD – 16 VRAM – 64 RAM – Sage Attention – Torch Compile

    The WAN 2.2 workflow is the ultimate solution for text-to-video (T2V) creation. It combines cutting-edge performance with extreme ease of use, while remaining 100% local for total control and maximum privacy. It leverages the latest innovations such as LightX, optimized LoRAs, Sage Attention, Torch Compile, as well as a local LLM agent for automatic prompt generation.

    🚀 Main Features:

    • WAN 2.2: Ultra-stable and fast T2V video generation engine.

    • LightX Acceleration: Dramatically reduces rendering time while maintaining exceptional visual quality.

    • Sage Attention: Advanced attention management for improved coherence in video outputs.

    • Torch Compile: Automatic performance optimization via PyTorch dynamic compilation, for an even faster and smoother workflow.

    • 4-Step Process: Simple and quick setup.

    • UltraFast Rendering: Designed for high-throughput production with maximum efficiency.

    • Auto Prompt with LLM Agent: Automatic generation of optimized prompts through a local LLM, minimizing manual input.

    • Fully Local Setup: Operates entirely offline with no cloud dependency.

    • LightX LoRA Models: Two lightweight yet powerful LoRA models for high-quality visual generation.

    • High & Low Noise Models: Fine-tune quality and style with two distinct noise model types.

    • HD Output (1280×720p): Crisp and detailed high-definition video.

    • Optimized for 16 GB VRAM and 64 GB RAM: Fully leverages hardware resources for smooth operation.
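    The "4 steps" and the High/Low Noise pairing work together: each denoising step is routed to one of the two WAN 2.2 experts depending on how noisy the latent still is. The sketch below is illustrative only; the 0.875 switch point is an assumption based on WAN 2.2's published T2V defaults, and the linear sigma schedule is a simplification of what the sampler actually uses.

```python
# Illustrative sketch: route each of the 4 denoising steps to the
# high-noise or low-noise WAN 2.2 expert based on a sigma boundary.
# The 0.875 boundary and the linear sigma schedule are assumptions,
# not values read from this workflow.

BOUNDARY = 0.875  # assumed high/low-noise switch point for T2V

def split_steps(num_steps: int, boundary: float = BOUNDARY):
    """Return (expert, sigma) pairs for a simple linear sigma schedule."""
    plan = []
    for i in range(num_steps):
        sigma = 1.0 - i / num_steps  # linear schedule from 1.0 down
        expert = "high_noise" if sigma >= boundary else "low_noise"
        plan.append((expert, round(sigma, 3)))
    return plan

plan = split_steps(4)
# Under these assumptions, step 1 runs on the high-noise expert and
# steps 2-4 on the low-noise expert.
```

    With only four steps total, the split matters: too few high-noise steps hurts overall composition, too few low-noise steps hurts fine detail.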

    💡 Workflow Advantages:
    ⚡ Ultra-fast rendering, even on complex scenes.
    🤖 Automatically generated prompts to save time.
    🔒 Zero cloud dependency: everything runs locally with full data control.
    🎯 Enhanced visual coherence thanks to Sage Attention.
    🔧 Advanced performance optimization with Torch Compile.
    🎥 HD video quality for professional-grade results.

    🖥 Recommended Setup:

    • GPU: Minimum 16 GB VRAM

    • RAM: 64 GB recommended

    • Video Output: 1280×720p
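    To see why 16 GB VRAM is the practical floor, a back-of-the-envelope weight-size estimate helps. The 14-billion-parameter figure comes from the "A14B" model name; the per-parameter byte counts below are approximations that ignore activations, the text encoder, and the VAE.

```python
# Rough weight-size estimate for a ~14B-parameter diffusion model at
# common precision levels. Purely illustrative arithmetic; real files
# also carry embeddings, metadata, and non-quantized layers.

PARAMS = 14e9  # "A14B" => roughly 14 billion parameters

def weight_gb(bytes_per_param: float, params: float = PARAMS) -> float:
    return params * bytes_per_param / 1e9  # decimal gigabytes

fp16 = weight_gb(2.0)  # ~28 GB: does not fit in 16 GB VRAM
fp8  = weight_gb(1.0)  # ~14 GB: tight but workable
q4   = weight_gb(0.5)  # ~7 GB: leaves headroom for latents and activations
```

    This is also why the 64 GB RAM recommendation matters: with two expert models in play, whichever one is not active can be offloaded to system memory.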

    Description

    # 📌 Patch Note – v2.0 (Native Edition)

    The new v2.0 marks a complete overhaul of the workflow.

    Rebuilt from scratch with native nodes, it preserves the power and simplicity of WAN 2.2 while gaining in stability, fluidity, and performance.

    This update optimizes resource management, strengthens Sage Attention, improves LightX support, and refines video generation for more coherent and faster results.

    ---

    ## ✨ Major New Features

    - 🔄 Complete rebuild with native nodes → no more wrapper dependency, reinforced stability.

    - ⚡ LightX High & Low Noise → two variants for maximum acceleration and precise control over quality/speed.

    - 🎯 Sage Attention

    - 🛠 Native performance optimizations

    - 📈 Extended compatibility → support for GGUF models and native optimizations.

    - 🎥 Smoother rendering
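    For readers switching to quantized models, GGUF files are easy to identify: the published GGUF container layout starts with a fixed magic and a small header. The sketch below validates only those first few fields and builds a synthetic header in memory rather than reading a real model file.

```python
import struct

# Minimal GGUF header check, following the published GGUF layout:
# 4-byte magic "GGUF", uint32 version, uint64 tensor count,
# uint64 metadata key/value count, all little-endian.

def read_gguf_header(data: bytes) -> dict:
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", data, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {"version": version, "tensors": n_tensors, "metadata_kv": n_kv}

# Synthetic header: version 3, 2 tensors, 5 metadata entries.
sample = struct.pack("<4sIQQ", b"GGUF", 3, 2, 5)
info = read_gguf_header(sample)
```

    In practice a GGUF loader node does this (and much more) for you; the point is simply that GGUF is a self-describing container, which is what makes the extended compatibility possible.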

    ---

    ## 🚀 Advantages Retained in v2.0

    - ⚡ Ultra-fast rendering

    - 🤖 Auto Prompt – Local LLM Agent to automatically generate adapted prompts. (A non-LLM version is coming soon!)

    - 🔧 Torch Compile for dynamic optimizations.

    - 🔒 Fully Local → 100% offline, no cloud dependency.

    - 🖼 HD Output (1280×720p) for professional results.

    - 💽 Optimized for 16 GB VRAM / 64 GB RAM
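    The Auto Prompt step can be pictured as a single chat request to a local LLM server. The sketch below only builds the request payload; the system-prompt wording and the OpenAI-compatible endpoint shape are assumptions for illustration, since the workflow's actual node configuration is not reproduced here.

```python
import json

# Hypothetical prompt-expansion request for a local, OpenAI-compatible
# LLM server (e.g. llama.cpp or LM Studio). The system prompt text is
# invented for illustration; the workflow's real instructions may differ.

SYSTEM = (
    "Expand the user's short idea into a detailed text-to-video prompt: "
    "describe subject, motion, camera, lighting, and style in one paragraph."
)

def build_request(idea: str, model: str = "local-model") -> str:
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": idea},
        ],
        "temperature": 0.7,
    }
    return json.dumps(payload)

req = build_request("a red fox running through snow at dawn")
```

    Because the server runs locally, this step adds no cloud dependency: the request never leaves the machine.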

    ---

    ## 🖥 Recommended Setup

    - GPU: Minimum 16 GB VRAM

    - RAM: 64 GB recommended

    - Video Output: 1280×720p

    ---

    ## 🔁 Migration

    - The old Wrapper-based workflow is no longer updated.

    ---

    ## 🔮 Summary

    v2.0 is a rebirth: more stable, faster, and better optimized.

    It keeps the strengths of WAN 2.2, removes wrapper limitations, and strengthens Sage Attention for significantly improved visual coherence.


    Comments (6)

    blobby99 · Aug 31, 2025 · 7 reactions

    If you have Blackwell (and maybe older GPU gens), avoid fp8 scaled for any model. In A-B testing, I've just discovered that fp8 scaled models are massively degraded, making motion and prompt adherence terrible (especially when using the LightX speedup). This may be down to recent Comfy "improvements" (the coders of Comfy are just terrible), but either way, for now go GGUF; even quants smaller than Q8 work much better.

    SKroUserIA (Author) · Aug 31, 2025

    Thanks for your help. I have an RTX 5080 with 16 GB VRAM and 64 GB RAM.

    gimatiplease106 · Sep 1, 2025 · 3 reactions

    You probably downloaded the fp8_scaled from the ComfyUI_Org repo; the quality was not as good. You might want to try Kijai's version of fp8_scaled. He optimized it very well, and it is much improved in my testing (see some of my videos on my profile; they all use fp8_scaled).

    https://huggingface.co/Kijai/WanVideo_comfy_fp8_scaled/tree/main

    SKroUserIA (Author) · Sep 1, 2025

    @gimatiplease106 Thank you, I will try that.

    Anon0815 · Oct 9, 2025

    I don't know if this is common knowledge, but is there a way to skip the LLM's prompt-generation step and just go with an already generated prompt? That would speed up testing different settings.
    Cheers.

    SKroUserIA (Author) · Oct 18, 2025

    Thanks for your comment. It's true that I haven't had time to do that since creating this workflow, because I'm currently building a workflow for easily making "unlimited" long videos.

    Workflows
    Wan Video 2.2 T2V-A14B

    Details

    Downloads: 1,000
    Platform: CivitAI
    Platform Status: Available
    Created: 8/31/2025
    Updated: 4/28/2026
    Deleted: -

    Files

    wan22T2VLightx4StepsUltrafastULTIMATE_nativeLLMV20.zip

    wan22T2VLightx4StepsUltrafastULTIMATE_nativeLLMV21.zip