CivArchive

    https://huggingface.co/datasets/StefanFalkok/ComfyUI_portable_torch_2.11.0_cu130_cp313_sageattention_triton - my new ComfyUI build with torch 2.11.0cu130 + sageattention + my workflows

    Hi! I'd like to introduce my WAN 2.2 8-step models

    NEW! Wan_2.2_I2V_SVI_2_PRO_8steps models

    merged with the SVI 2 PRO LoRAs - you don't need to choose an SVI LoRA to make long videos

    https://huggingface.co/StefanFalkok/Wan_2.2_I2V_SVI_2_PRO_8steps - merged models + a special workflow for those models + HunyuanVideo-Foley to add sound to the videos

    Download

    FP8/FP16 + Workflows - https://huggingface.co/StefanFalkok/Wan_2.2_10steps/tree/main

    GGUF + Workflows - https://huggingface.co/StefanFalkok/Wan_2.2_10steps_GGUF/tree/main (Q8_0 and Q4_K_M)

    Everything you need: just load the model without the Light LoRA, set 8 steps (4/4), CFG 2 on High Noise and CFG 1 on Low Noise, 81 frames (16 fps), and a resolution from 576p to 720p
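The recommended settings above can be sketched as a plain Python dict, e.g. for driving ComfyUI's API programmatically. This is only an illustration of the numbers from the text; the key names are mine, not the exact field names from the workflow:

```python
# Hedged sketch: the recommended generation settings as a dict.
# Key names are illustrative, not the workflow's actual node fields.

settings = {
    "steps_total": 8,        # 8 steps, split 4/4
    "steps_high_noise": 4,   # first 4 steps on the High Noise model
    "steps_low_noise": 4,    # last 4 steps on the Low Noise model
    "cfg_high_noise": 2.0,   # CFG 2 on High Noise
    "cfg_low_noise": 1.0,    # CFG 1 on Low Noise
    "frames": 81,            # 81 frames at 16 fps ~ 5 seconds of video
    "fps": 16,
    "min_resolution": 576,   # 576p minimum
    "max_resolution": 720,   # 720p maximum
}

# Sanity check: the 4/4 split adds up to the total step count.
assert settings["steps_high_noise"] + settings["steps_low_noise"] == settings["steps_total"]

print(settings["frames"] / settings["fps"], "seconds of video")  # → 5.0625 seconds of video
```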

    I merged the original WAN 2.2 models from the ComfyUI repository with the LightX2V T2V rank-256 BF16 LoRA (https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Lightx2v/lightx2v_T2V_14B_cfg_step_distill_v2_lora_rank256_bf16.safetensors)

    FP8 results on an RTX 5080:

    1024x576 - around 3 minutes

    720p - around 5 minutes

    Q8_0 and FP16 take about 20% more time to generate a video, but you get higher quality and a more stable result

    My TG Channel - https://t.me/StefanFalkokAI

    My TG Chat - https://t.me/+y4R5JybDZcFjMjFi

    Description

    Wan 2.2 I2V HighNoise 10 Steps FP8

    FAQ

    Comments (14)

    GlowingGuardianGirl · Oct 31, 2025 · 1 reaction
    CivitAI

    Finally merged! Great.
    This would be a great success if it weren't only for high-end 50XX GPUs. Do you plan on merging Q3_K_M / Q4_K_S for low-VRAM generation?

    Stefan_Falkok
    Author
    Oct 31, 2025

    Hi. I think I'll quantize these models in the near future. For now, I recommend using the FP8 model and downloading my workflow from the description

    Stefan_Falkok
    Author
    Oct 31, 2025

    I want to ask: would you prefer Q4_K_M for better quality, or do you need the Q4_K_S model?

    rivdemon1221554 · Oct 31, 2025 · 1 reaction

    No problem on a 4080super with 32GB RAM. You're exaggerating.

    Jellai · Oct 31, 2025 · 1 reaction

    I can use fp8 fine on my 3090 with 24GB VRAM.

    Stefan_Falkok
    Author
    Oct 31, 2025

    @Jellai it's very cool

    faryarjy100 · Oct 31, 2025
    CivitAI

    Hi. How do you use FP16? 28 GB is bigger than 16 GB of VRAM, isn't it?

    Stefan_Falkok
    Author
    Oct 31, 2025

    Hi. I use Q8_0 instead, because its quality is closer to FP16 but it weighs about half as much and uses roughly the same resources as the FP8 model. So if you want to use the FP16 model but you have low VRAM and RAM, use Q8_0
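As a rough sanity check on the size claims in this thread, here is back-of-the-envelope arithmetic for a 14B-parameter model. The bytes-per-parameter figures are approximations I'm assuming (Q8_0 stores 8-bit weights plus per-block scales, so slightly more than 1 byte per parameter), not exact file sizes:

```python
# Rough size math for a 14B-parameter model at different precisions.
# Assumptions: ~2 bytes/param for FP16, ~1 byte/param for FP8,
# ~8.5 bits/param for Q8_0 (8-bit weights + per-block scale overhead).

params = 14e9  # 14 billion parameters

fp16_gb = params * 2 / 1e9          # ≈ 28 GB
fp8_gb = params * 1 / 1e9           # ≈ 14 GB
q8_0_gb = params * (8.5 / 8) / 1e9  # ≈ 14.9 GB

print(f"FP16 ≈ {fp16_gb:.0f} GB, FP8 ≈ {fp8_gb:.0f} GB, Q8_0 ≈ {q8_0_gb:.1f} GB")
```

This matches the figures in the thread: FP16 around 28 GB, and Q8_0 roughly half that, close to the FP8 footprint.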

    faryarjy100 · Oct 31, 2025

    @Stefan_Falkok Thank you. I asked ChatGPT and it totally confused me. It told me FP8 is better than Q8. Only after I showed it many links about GGUF did it accept that GGUF can be used instead of FP. I want to be sure: as you say, Q8_0 is GGUF and better than FP8?

    Stefan_Falkok
    Author
    Oct 31, 2025

    @faryarjy100 ChatGPT is often wrong about local models. In my experience, Q8_0 is better in quality and more stable with prompts than FP8, and Q8_0 is closest in quality to FP16. So I recommend the Q8_0 model if you don't have enough VRAM/RAM

    faryarjy100 · Oct 31, 2025

    With a laptop 5080 (16 GB VRAM) and 32 GB RAM, what do you suggest?

    Stefan_Falkok
    Author
    Oct 31, 2025

    @faryarjy100 Oh, I thought you had 64 GB RAM and could easily use Q8_0, but now I'm in doubt. You can try both FP8 and Q8_0. Otherwise, I'll release models with smaller quantizations. But I recommend increasing your RAM to 64 GB (96 or 128 GB is even better) to use the Q8_0 model

    faryarjy100 · Oct 31, 2025 · 2 reactions

    I will increase it as you recommended. Thank you so much for your patience and guidance.

    Stefan_Falkok
    Author
    Oct 31, 2025

    @faryarjy100 I'm glad to help

    Checkpoint
    Wan Video 2.2 I2V-A14B

    Details

    Downloads
    1,297
    Platform
    CivitAI
    Platform Status
    Available
    Created
    10/30/2025
    Updated
    5/12/2026
    Deleted
    -

    Files

    wan228StepsSVI2PROT2VI2VFP8_fp8I2vhighnoise.safetensors