Wan 2.2 - FFLF - XT 404 - Master UI

Just my personal workflow; I did not create the model or the nodes!

Use the nodes and models linked below!

Nodes > https://github.com/XT-404/XT-404_SKYNET

Lora Loader > https://github.com/HenkDz/rgthree-comfy

Models > https://civarchive.com/models/2244942?modelVersionId=2527274


    🤖 XT-404 Skynet Suite: Wan 2.2 Integration

    The "Omega Edition" for ComfyUI


    The XT-404 Skynet Suite is a highly specialized, battle-tested collection of custom nodes for ComfyUI, specifically engineered for Wan 2.1 and 2.2 video diffusion models.

    Unlike standard nodes, this suite focuses on "Visual Supremacy"—achieving 8K, OLED-grade quality with mathematical precision. It abandons generic processing for heuristic, context-aware algorithms that protect signal integrity, manage VRAM surgically, and eliminate digital artifacts.


    ⚠️ Requirements

    • ComfyUI: Latest version recommended.

    • Wan 2.2 Models: Ensure you have the VAE, CLIP, and UNet/Transformer models.

    • Python: 3.10+.

    • FFmpeg: Required for the Compressor node (usually via imageio-ffmpeg).

    Caution

INFILTRATION PROTOCOL (GGUF): To utilize GGUF Quantized Models with the Cyberdyne Model Hub, the ComfyUI-GGUF engine is REQUIRED.

📥 Download Engine: city96/ComfyUI-GGUF

Without this engine, the Cyberdyne Model Hub will operate in Safetensors-only mode.


    🚀 Key Features

    • Zero-Point Noise Injection: Eliminates static "snow" in video generation.

    • ARRI Rolloff Tone Mapping: Prevents white clipping even in high-contrast scenes.

    • Nano-Repair (Genisys): Real-time tensor monitoring to prevent black screens/NaNs caused by TF32 precision.

    • OLED/8K Workflow: Dedicated pipeline for deep blacks, organic grain, and micro-detail hallucination.

    • Sentinel Telemetry: Real-time console logs ("The Mouchard") that analyze saturation, clipping, and VRAM usage per step.


    📦 Installation

    Navigate to your ComfyUI custom nodes directory:

    cd ComfyUI/custom_nodes/

    Clone this repository:

git clone https://github.com/XT-404/XT-404_SKYNET.git

    Install requirements:

    pip install imageio-ffmpeg scikit-image

    🛠️ Module Breakdown

    1. The Core Engine (XT404_Skynet_Nodes.py)

    The heart of the generation process. Replaces standard KSamplers with a hybrid engine optimized for Wan's Flow Matching.

    • Zero-Point Fix: Ensures 0 + Noise = Pure Noise, clearing the latent before injection.

• Wan Sigma Calculator: Uses the specific shift formula required by Wan 2.1/2.2 (sketched after this list).

• Chain Architecture: Facilitates "Hires Fix" by passing the master sigma schedule between nodes.
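A minimal sketch of the Zero-Point fix and the sigma shift, assuming the discrete flow-matching shift formula commonly used for Wan-family models (the node's exact internals are not shown; function names are illustrative):

```python
import torch

def zero_point_inject(latent: torch.Tensor, noise: torch.Tensor, sigma: float) -> torch.Tensor:
    # Zero-Point fix: clear the latent first so the initial state is
    # exactly 0 + sigma * noise, not noise stacked on stale latent data.
    return torch.zeros_like(latent) + sigma * noise

def wan_shift_sigmas(sigmas: torch.Tensor, shift: float = 5.0) -> torch.Tensor:
    # Flow-matching time shift: sigma' = shift*sigma / (1 + (shift-1)*sigma).
    # shift=5.0 matches the Wan 2.2 setting recommended later on this page.
    return shift * sigmas / (1.0 + (shift - 1.0) * sigmas)
```

With shift = 5.0 and sigmas running from 1.0 down to 0.0, the schedule is stretched toward the high-noise end, which is where Wan's latent timing differs from standard KSampler defaults.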

    2. Universal Loader (cyberdyne_model_hub.py)

    A unified loader for Checkpoints, SafeTensors, and GGUF models.

    • Recursive Search: Finds models in subdirectories automatically.

    • GGUF Delegation: Detects GGUF files and routes them to the appropriate backend.

    • Smart Offload: Aggressively offloads unused models to RAM to free VRAM for the sampler.

    3. Visual Supremacy Suite (wan_visual_supremacy.py)

    The "Secret Sauce" to cure the "AI Plastic Look."

    • Latent Detailer X: Injects micro-details before decoding while preventing signal saturation.

    • Temporal Lock Pro: A post-decode stabilizer that blends low-delta frames to eliminate flicker.

• OLED Dynamix (ARRI Rolloff): Logarithmic compression curve that preserves highlight textures (this and Temporal Lock Pro are sketched below).
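A minimal sketch of the two filters, assuming frames in [T, H, W, C] layout with values in 0..1 (the threshold, blend factor, and exact rolloff curve are assumptions, not the node's internals):

```python
import torch

def temporal_lock(frames: torch.Tensor, threshold: float = 0.01, blend: float = 0.5) -> torch.Tensor:
    # Where a frame barely differs from the previous one, pull it toward
    # the previous frame so near-static regions stop shimmering.
    out = frames.clone()
    for t in range(1, frames.shape[0]):
        if (frames[t] - out[t - 1]).abs().mean() < threshold:
            out[t] = blend * out[t - 1] + (1.0 - blend) * frames[t]
    return out

def highlight_rolloff(img: torch.Tensor, knee: float = 0.8) -> torch.Tensor:
    # Linear below the knee, smooth tanh shoulder above it: values
    # asymptote to 1.0, so highlight texture compresses instead of clipping.
    over = (img - knee).clamp(min=0.0)
    rolled = knee + (1.0 - knee) * torch.tanh(over / (1.0 - knee))
    return torch.where(img > knee, rolled, img)
```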

    4. Nano-Repair System (wan_genisys.py)

    • Node: Cyberdyne Genisys [OMNISCIENT]

    • Function: Solves "Black Screen" issues in TF32/BF16 by calculating tensor drift and clamping values before they hit NaN.

    5. T-X Interpolator (wan_tx_node.py)

    • Function: Generates video between a Start and End image.

    • Innovation: Uses Inverse Structural Repulsion to force the model to hallucinate a transformation path rather than a simple blend.


    For the ultimate 8K OLED look, chain the nodes in this specific order:

    1. Loader: Cyberdyne Model Hub (Load Model & VAE).

    2. Prompt: Wan Text Cache & Wan Vision Cache.

3. Generation: WanImageToVideoUltra -> XT-404 Skynet 1 (Master).

    4. Refinement: XT-404 Skynet 3 (Refiner) (Denoise 0.3).

    5. Decode: VAE Decode.

    6. Visual Supremacy Stack:

      • Temporal Lock Pro (Stabilize pixels).

      • OLED Dynamix (Sculpt light).

      • Organic Skin (Add texture).

    7. Final Polish: Wan Chroma Mimic (Validate signal & sharpen).

8. Encode: Video Combine / Wan Compressor.


    📟 The Console HUD (XT-Mouchard)

    Don't ignore the console! The suite communicates signal health:

    • 🟢 GREEN: Signal is healthy.

    • 🟡 YELLOW: High signal detected (Rolloff is active).

    • 🔴 RED: Critical saturation/clipping (Lower specular_pop).

    Example Log:

    [XT-MIMIC] 🎨 FINAL VALIDATION | DynRange: [0.000, 0.982]
       └── Signal Integrity: OK (Clip: 0.00%)
    

    This indicates mathematically perfect blacks and whites capped at 98.2% to allow for display bloom.


    📜 Credits

    • Architect: XT-404 Omega

    • Corp: Cyberdyne Systems

    • Status: GOLD MASTER (V3.8)

    "There is no fate but what we make."


    Maintained by Cyberdyne Research Division. Open an issue for "Infiltration Reports."


    🤖 XT-404 Skynet : Wan 2.2 Sentinel Suite (OMEGA EDITION)

    Cyberdyne Systems Corp. | Series T-800 | Model 101


    "The future is not set. There is no fate but what we make for ourselves."


    ⚠️ CRITICAL SYSTEM DEPENDENCY

    Caution

    INFILTRATION PROTOCOL (GGUF): To utilize GGUF Quantized Models with the Cyberdyne Model Hub, the ComfyUI-GGUF engine is REQUIRED.

    📥 Download Engine: city96/ComfyUI-GGUF

    Without this engine, the Cyberdyne Model Hub will operate in Safetensors-only mode.


    🚀 WHY CHOOSE XT-404 SKYNET? (Competitive Analysis)

    Standard nodes rely on generic implementations. XT-404 Skynet is a custom-engineered architecture built specifically for the quirks of Wan 2.2.

| Feature | Standard Nodes / Competition | 🤖 XT-404 Skynet Architecture |
| --- | --- | --- |
| Precision | Standard FP16/BF16 (prone to banding) | Hybrid FP32/TF32 Contextual Switching (zero banding) |
| Interpolation | Basic linear fades (static/frozen) | T-X Dual-Phase Wrapper (native VAE injection) |
| Color Science | RGB clipping | LAB Space Transfer & OLED Dynamics (cinema grade) |
| Caching | Basic TeaCache (motion freeze risk) | T-3000 Genisys w/ Kinetic Momentum & Nano-Repair |
| Scaling | Bilinear (blurry) | Lanczos/Bicubic FP32 (pixel perfect) |
| Memory | High VRAM usage (OOM risk) | Surgical Pinned Memory (DMA) & aggressive purge |


    🌍 NEURAL NET NAVIGATION

    🇺🇸 ENGLISH DOCUMENTATION

    1. Visual Engineering (Wan Chroma Mimic)

    2. Infiltration (Model Loader)

    3. Neural Net Core (XT-404 Samplers)

    4. T-3000 Genisys (Omniscient Cache)

    5. Mimetic Rendering (I2V Ultra & Fidelity)

    6. Polymetric Alloy (T-X Dual-Phase) 🆕

    7. Sensors & Accelerators (Omega Tools)

    8. Post-Processing & Automation

🇫🇷 FRENCH DOCUMENTATION

See the French version for complete technical details.


    🇺🇸 ENGLISH DOCUMENTATION

    🎨 Phase 0: Visual Engineering (Wan Chroma Mimic)

    File: wan_chroma_mimic.py

    The Ultimate Color Grading Engine. This is not a simple filter. It operates in real-time on the GPU, converting image tensors to the LAB Color Space to separate luminance from color information, allowing for cinema-grade referencing without destroying lighting data.
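A minimal sketch of a LAB-space mood transfer of this kind, using kornia for the GPU color-space conversion (kornia is an assumption here; the node ships its own conversion, and the mean/std matching step is illustrative):

```python
import torch
import kornia.color as KC  # assumed helper; the node implements its own GPU LAB conversion

def lab_mood_transfer(img: torch.Tensor, ref: torch.Tensor, intensity: float = 0.25) -> torch.Tensor:
    # img/ref: [B, 3, H, W] RGB in 0..1. Match the image's per-channel LAB
    # statistics to the reference so the color mood moves while the
    # luminance structure survives, then blend by intensity.
    img_lab, ref_lab = KC.rgb_to_lab(img), KC.rgb_to_lab(ref)
    i_mu, i_sd = img_lab.mean((2, 3), keepdim=True), img_lab.std((2, 3), keepdim=True)
    r_mu, r_sd = ref_lab.mean((2, 3), keepdim=True), ref_lab.std((2, 3), keepdim=True)
    graded = KC.lab_to_rgb((img_lab - i_mu) / (i_sd + 1e-6) * r_sd + r_mu).clamp(0, 1)
    return torch.lerp(img, graded, intensity)
```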

    🔥 Key Features & Configuration

    • Architecture: 100% PyTorch GPU. 0% CPU latency.

    • Morphological Filter: Removes micro-artifacts (black/white dots) generated by video diffusion before they expand.

• OLED Dynamics: Applies a non-linear S-Curve centered on 0.5 to deepen blacks while preserving peak highlights (sketched after the parameter table).

| Parameter | Recommended | Description |
| --- | --- | --- |
| reference_image | REQUIRED | The source image (style reference); the mood is extracted from here. |
| effect_intensity | 0.25 | Blending strength of the LAB transfer. |
| oled_contrast | 0.00 | The "Netflix" look; boosts dynamic range. 0.0 = neutral. |
| skin_metal_smooth | 0.25 | Smart surface blur; smooths skin/metal but detects edges to keep sharpness. |
| detail_crispness | 0.2 | Cinema piqué; enhances micro-details using a difference-of-Gaussians approach. |
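A minimal sketch of a mid-anchored S-curve of the kind oled_contrast describes (the exact curve the node uses is an assumption):

```python
import torch

def oled_s_curve(img: torch.Tensor, contrast: float = 0.3) -> torch.Tensor:
    # Cubic S-curve fixed at 0.0, 0.5, and 1.0: steeper through the midtones
    # (slope 1 + contrast at mid-grey), flatter near black and white, so
    # blacks deepen without crushing and peaks stay intact. Keep contrast < 0.5.
    x = img - 0.5
    return (0.5 + x * (1.0 + contrast * (1.0 - 4.0 * x * x))).clamp(0.0, 1.0)
```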


    🛡️ Phase 1: Infiltration (Cyberdyne Model Hub)

    File: cyberdyne_model_hub.py

    A unified loader bridging Safetensors and GGUF architectures. It solves the "Dual-UNet" requirement of Wan 2.2 automatically.

• Recursive Scanner: Finds models in subfolders (see the sketch after this list).

    • Skynet Protocol: Active VRAM management. It calculates the checksum (SHA256) and purges memory before loading to prevent fragmentation.

    • Hybrid Loading: Can load a High-Res FP16 model and a Low-Res GGUF model simultaneously.
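A minimal sketch of the recursive scan and checksum steps described above (file extensions and the chunked-hashing approach are assumptions):

```python
import hashlib
import os

def find_models(root: str, exts=(".safetensors", ".gguf", ".ckpt")) -> list[str]:
    # Walk every subfolder under the models root and collect loadable files.
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        hits += [os.path.join(dirpath, f) for f in files if f.lower().endswith(exts)]
    return sorted(hits)

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    # Stream the file in 1 MiB chunks so multi-GB checkpoints never need
    # to fit in RAM just to be fingerprinted.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()
```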


    🧠 Phase 2: Neural Net Core (XT-404 Samplers)

    File: XT404_Skynet_Nodes.py

    The "Sentinel" engine. Unlike standard samplers, these are hard-coded with the simple (Linear) scheduler required by Wan 2.2, preventing the "fried output" issues seen with standard KSamplers.

    🔴 XT-404 Skynet 1 (Master)

    • Shift Value (5.0): The critical setting for Wan 2.2 latent timing.

    • Bongmath Engine: A custom texture-noise injection system.

      • True: Adds analog film grain coherence.

      • False: Pure digital cleanliness.

    🟡 XT-404 Skynet 2 (Chain)

    • Seed Lock: Automatically inherits the seed from the Master node via the options dictionary. Ensures temporal consistency across generation passes.

    🟢 XT-404 Skynet 3 (Refiner)

    • Resample Mode: Injects controlled noise at the end of the chain to hallucinate high-frequency details.


    💀 Phase 3: T-3000 Genisys (Omniscient Cache)

    File: wan_genisys.py

    Superior to TeaCache. Standard TeaCache freezes video motion when the difference is too low. T-3000 uses "Kinetic Momentum".

    • Kinetic Momentum: If motion is detected, it forces the next X frames to calculate, preventing the "mannequin challenge" effect.

• Nano-Repair: Detects NaN or Inf values (black screen bugs) in the tensor stream and surgically repairs them using soft-clamping (-10/+10) instead of hard clipping (sketched after this list).

    • HUD: Displays real-time signal integrity and drift metrics in your console.
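A minimal sketch of the two mechanisms above, assuming a tanh-style soft clamp and a simple forced-compute counter (the threshold and momentum window are assumptions):

```python
import torch

def nano_repair(x: torch.Tensor, limit: float = 10.0) -> torch.Tensor:
    # Replace NaN/Inf outright, then soft-clamp with tanh so values near
    # the +/-10 limit are compressed smoothly rather than hard-clipped.
    x = torch.nan_to_num(x, nan=0.0, posinf=limit, neginf=-limit)
    return limit * torch.tanh(x / limit)

def cache_gate(delta: float, momentum: int,
               threshold: float = 0.05, window: int = 3) -> tuple[bool, int]:
    # Kinetic Momentum: once motion is detected, force the next `window`
    # steps to compute so low-delta frames can't freeze into a mannequin.
    if delta > threshold:
        return True, window          # motion seen: arm the momentum window
    if momentum > 0:
        return True, momentum - 1    # still inside the forced-compute window
    return False, 0                  # genuinely static: safe to reuse cache
```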


    🎭 Phase 4: Mimetic Rendering (I2V Ultra & Fidelity)

    Files: nodes_wan_ultra.py / wan_fast.py

    🌟 Wan Ultra (The Quality King)

• Nuclear Normalization: Sanitizes input images to a strict 0.0-1.0 range using Bicubic-AntiAlias resampling.

• Detail Boost: Applies a sharpening convolution matrix before VAE encoding to counteract compression blur.

• Motion Amp: Uses a "Soft Limiter" (Tanh curve) to amplify motion vectors without breaking physics (the normalization and limiter are sketched after this list).
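A minimal sketch of the normalization and the tanh soft limiter (the amp and ceiling defaults are assumptions):

```python
import torch
import torch.nn.functional as F

def nuclear_normalize(img: torch.Tensor, size: tuple[int, int]) -> torch.Tensor:
    # img: [B, C, H, W]. Resize with antialiased bicubic, then clamp hard
    # to 0..1 so no out-of-range values reach the VAE encoder.
    img = F.interpolate(img, size=size, mode="bicubic", antialias=True)
    return img.clamp(0.0, 1.0)

def soft_limit_motion(motion: torch.Tensor, amp: float = 1.3, ceiling: float = 2.0) -> torch.Tensor:
    # Amplify, then pass through tanh so large vectors saturate at the
    # ceiling instead of overshooting and breaking the motion physics.
    return ceiling * torch.tanh(amp * motion / ceiling)
```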

    ⚡ Wan Fidelity (The Speed King)

    • Optimization: Uses torch.full instead of concatenations for memory efficiency.

    • Logic: Restores the original Wan 2.1 context window logic for perfect temporal coherence.


    🧪 Phase 6.5: Polymetric Alloy (T-X Dual-Phase) [NEW]

    File: wan_tx_node.py

    The Interpolation Singularity. Standard I2V models struggle to reach a specific end frame (often freezing or losing style). The T-X Engine uses a Native VAE Injection Wrapper to bridge the timeline perfectly.

• Keyframe Injection: Temporarily overrides the VAE's internal logic to encode [Start Frame -> Empty Void -> End Frame] without corrupting the latent space (a simplified sketch follows the parameter table).

    • Fluid Morphing: Forces the Wan 2.2 model to solve the physics equation between Point A and Point B, preventing "slideshow" effects.

    • Smart VRAM Scanner: Automatically detects GPU capacity to switch between "Safe" (512px tiling) and "Ultra" (1280px tiling) modes.

| Parameter | Description |
| --- | --- |
| start_image | The origin frame (Frame 0). |
| end_image | The target frame (Frame N); the T-X engine forces convergence to this image. |
| motion_amp | Amplifies the latent motion vectors between keyframes. |
| detail_boost | Pre-processing sharpening to retain texture during VAE compression. |
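A minimal sketch of the keyframe-injection idea: pinned start/end latents plus a mask telling the sampler which frames it must hallucinate. The real wrapper works inside the VAE; this standalone layout is an assumption:

```python
import torch

def keyframe_timeline(start_lat: torch.Tensor, end_lat: torch.Tensor, n_frames: int):
    # start_lat/end_lat: [C, H, W] encoded keyframes. The middle of the
    # timeline is left as an "empty void" for the model to solve.
    c, h, w = start_lat.shape
    video = torch.zeros(n_frames, c, h, w)
    mask = torch.zeros(n_frames)         # 0.0 = free frame, 1.0 = pinned keyframe
    video[0], video[-1] = start_lat, end_lat
    mask[0] = mask[-1] = 1.0
    return video, mask
```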


    ⚡ Phase 5: Sensors & Accelerators (Omega Tools)

    🚀 Wan Hardware Accelerator (Anti-Burn V4)

File: wan_accelerator.py

The "Secret Sauce" of performance.

    • Problem: Enabling TF32 on Wan 2.2 normally "burns" images (contrast issues) due to normalization errors.

• Solution (Contextual Switching): This node enables TF32 globally for speed, but intercepts GroupNorm and LayerNorm layers to force them into FP32 precision (sketched after this list).

• Result: The ~30% speed boost of TF32 with the visual quality of FP32.
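A minimal sketch of the idea, assuming a forward pre-hook to upcast activations entering the norm layers (the node's actual interception mechanism is not shown):

```python
import torch

# Allow TF32 on the matmul/conv fast paths for the speed win...
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

def pin_norms_to_fp32(model: torch.nn.Module) -> None:
    # ...but keep GroupNorm/LayerNorm in FP32, since their mean/variance
    # reductions are where reduced precision shows up as burned contrast.
    for m in model.modules():
        if isinstance(m, (torch.nn.GroupNorm, torch.nn.LayerNorm)):
            m.float()  # upcast the layer's own weights
            m.register_forward_pre_hook(
                lambda mod, args: tuple(
                    a.float() if torch.is_tensor(a) else a for a in args
                )
            )
```

In a mixed-precision graph, the FP32 outputs of these layers may need casting back down before the next FP16 block; the sketch omits that step.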

    👁️ Wan Vision & Text Cache (DMA)

    File: wan_i2v_tools.py

• Pinned Memory: Uses CPU page-locked memory (DMA) to transfer text embeddings to the GPU asynchronously, with no pageable staging copy (both tools are sketched after this list).

    • Vision Hash: Hashes the image content (including stride) to avoid re-encoding the same CLIP Vision input.
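A minimal sketch of both tools (the hashing scheme is simplified; the node also folds stride information into the key):

```python
import hashlib
import torch

def to_gpu_dma(t: torch.Tensor, device: str = "cuda") -> torch.Tensor:
    # Pin the host buffer so the GPU copy engine can DMA it over
    # asynchronously instead of staging through pageable RAM.
    return t.pin_memory().to(device, non_blocking=True)

_vision_cache: dict[str, torch.Tensor] = {}

def cached_vision_encode(image: torch.Tensor, encode) -> torch.Tensor:
    # Fingerprint the raw pixel bytes so the same CLIP Vision input is
    # only ever encoded once per session.
    key = hashlib.sha256(image.detach().contiguous().cpu().numpy().tobytes()).hexdigest()
    if key not in _vision_cache:
        _vision_cache[key] = encode(image)
    return _vision_cache[key]
```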


    🛠️ Phase 6: Post-Processing & Automation

    • Wan Compressor (Omega): Thread-safe H.265 encoding. Limits CPU threads to 16 to prevent Threadripper/i9 crashes.

    • Wan Cycle Terminator: Uses Windows API EmptyWorkingSet to flush RAM standby lists (prevents OS stutter).

• Auto Wan Optimizer: Smart resizer that enforces modulo-16 dimensions (required by Wan) and guards against OOM above 1024px (the sizing rule is sketched below).
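A minimal sketch of the modulo-16 sizing rule, plus the Windows working-set flush the Cycle Terminator relies on (the 1024px cap and rounding-down choice are assumptions):

```python
import ctypes
import sys

def wan_safe_size(width: int, height: int, max_side: int = 1024) -> tuple[int, int]:
    # Shrink so the longer side stays under the OOM guard, then snap both
    # dimensions down to the nearest multiple of 16 as Wan requires.
    scale = min(1.0, max_side / max(width, height))
    w = max(16, int(width * scale) // 16 * 16)
    h = max(16, int(height * scale) // 16 * 16)
    return w, h

def flush_working_set() -> None:
    # Windows-only: ask the OS to trim this process's working set,
    # pushing cold pages out of the RAM standby list.
    if sys.platform == "win32":
        handle = ctypes.windll.kernel32.GetCurrentProcess()
        ctypes.windll.psapi.EmptyWorkingSet(handle)
```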
