Qwen-Image-Edit-2509_clear - Q3_K_L

[Comparison previews: Qwen-Image-Edit-2509_clear | Qwen-Image-Edit-2509 (Original)]

    A fine-tuned version of the Qwen-Image-Edit-2509 model, designed to produce clearer and more vibrant outputs.

    Compared to the original, this model generates illustrations with higher contrast, richer colors, and more refined details.

    General

    • CFG scale: 2.5

    • Sampler:

      • Euler (Speed)

      • heunpp2 (Quality)

    • Scheduler: beta

    • Steps: 14 (12-24)

    • ModelSamplingAuraFlow: shift 4 (3-6)
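As an illustrative sketch only (not an official workflow), the recommended settings above can be collected into a KSampler-style parameter set. The dictionary keys below mirror common ComfyUI KSampler input names and are assumptions, not part of this model release:

```python
# Illustrative sketch of the recommended sampling settings from this page.
# The key names mimic ComfyUI's KSampler inputs; treat them as assumptions.

RECOMMENDED = {
    "cfg": 2.5,
    "sampler_name": "euler",  # or "heunpp2" for quality over speed
    "scheduler": "beta",
    "steps": 14,              # usable range: 12-24
    "aura_flow_shift": 4.0,   # ModelSamplingAuraFlow shift, range 3-6
}

def validate(settings: dict) -> bool:
    """Check the tunable values stay inside the ranges given on this page."""
    return (
        12 <= settings["steps"] <= 24
        and 3.0 <= settings["aura_flow_shift"] <= 6.0
        and settings["sampler_name"] in ("euler", "heunpp2")
    )
```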

    Upscaler

    Use any upscaler you like; examples:

    LoRAs

    While we haven't specifically tested individual LoRAs, the Qwen-Image-Edit-2509_clear model is only a fine-tuned version of Qwen-Image-Edit-2509.

    Therefore, we believe that LoRAs designed for Qwen-Image-Edit-2509 should still be effective.


    Tips (External site)



    Comments (14)

    GlowingGuardianGirl · Nov 20, 2025

    20 GB for Q8_0... what kind of GPU is required to run those crazy models?

    happylittleteapot · Nov 20, 2025

    With offloading, most can. You generally need VRAM + RAM. Most will have, say, 8 + 16, so that's 24. Obviously there's the CLIP and VAE too, which also go through the same loading process.
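The back-of-the-envelope check described above can be sketched in a few lines of Python. The overhead figure is an illustrative placeholder for the text encoder, VAE, and activations, not a measured value for this release:

```python
# Rough "does it fit?" check for offloaded model loading: a model can be
# streamed if it fits in combined VRAM + system RAM. Sizes are illustrative.

def fits_with_offloading(model_gb: float, vram_gb: float, ram_gb: float,
                         overhead_gb: float = 4.0) -> bool:
    """overhead_gb loosely accounts for the text encoder, VAE, and activations."""
    return model_gb + overhead_gb <= vram_gb + ram_gb

# The example from the comment: 8 GB VRAM + 16 GB RAM vs. a ~20 GB Q8_0 file.
print(fits_with_offloading(20, 8, 16))  # True: 24 GB total covers 20 + 4
print(fits_with_offloading(20, 8, 8))   # False: 16 GB total is not enough
```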

    easygoing0114 (Author) · Nov 20, 2025 · 2 reactions

    If you allow partial loading instead of full model loading, BF16 format will work with 16GB of VRAM. I’m generating images in about 10-15 minutes using an RTX 4060 Ti 16GB.

    The key points are to load the text encoder into system RAM instead of VRAM and to allocate as much VRAM as possible to ComfyUI.

    For more details, please refer to the workflow in the Guide and the Tips article.

    neo_neil_71 · Nov 21, 2025

    How will this model work on dual 3090s with a total of 48 GB available? Is there anyone utilizing a dual-GPU setup for this model effectively?

    fantastorium · Nov 24, 2025 · 1 reaction

    I can run it on my 4060 Ti (16GB VRAM) even without forced CLIP offloading.

    qek · Nov 24, 2025

    @fantastorium I can, but it's very slow, even with Lightning

    HackSlash · Nov 24, 2025 · 3 reactions

    @neo_neil_71 Yes there is. In ComfyUI, use the ComfyUI-MultiGPU nodes: put the text encoder on one GPU and the model on the other. It's what I do (4090 and 3090), and I get generations in about 20-30 seconds if I remember correctly (12 steps).

    menatombocom419 · Dec 7, 2025

    Don't use a laptop or spin up on runpod

    menatombocom419 · Dec 7, 2025

    @HackSlash Same. 

    ReignShad0 · Dec 28, 2025

    Hmm, when trying out the FP8 version with my normal Qwen Image Edit workflow for ComfyUI, I get nothing but black images.

    I'm unsure of what the problem is but this model fails in the same ComfyUI workflow that the standard QwenImageEdit normally works for...

    ReignShad0 · Dec 28, 2025

    Further testing seems to point out that Sage Attention is incompatible with this. Same workflow, but a restart without it enabled fixes the black outputs.

    easygoing0114 (Author) · Dec 28, 2025 · 1 reaction

    @ReignShad0 
    Thank you for the detailed report!

    The Qwen-Image-Edit-2509_clear model has some layers tuned differently compared to the original, which can make the computation results slightly more unstable than the standard version.

    | Format | Sign | Exponent | Mantissa | Significant Digits | Precision |
    |--|--|--|--|--|--|
    | FP32 | 1 bit | 8 bit | 23 bit | 7-8 digits | ✅ |
    | BF16 | 1 bit | 8 bit | 7 bit | 2-3 digits | 👍 |
    | FP16 | 1 bit | 5 bit | 10 bit | 3-4 digits | 👍 |
    | FP8 (e5m2) | 1 bit | 5 bit | 2 bit | 1-2 digits | ⚠️ |
    | FP8 (e4m3) | 1 bit | 4 bit | 3 bit | 1-2 digits | ⚠️ |

    This model's FP8 version uses the e4m3 format, which has a very narrow dynamic range (fewer exponent bits). When combined with Sage Attention—which introduces its own 8-bit quantization logic—it likely pushes the values out of range, resulting in NaNs (black images).
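The range gap between the two FP8 variants can be reproduced with a short, self-contained Python sketch. It computes the largest normal value and the approximate decimal precision of an IEEE-754-style format directly from its exponent and mantissa bit counts:

```python
import math

def max_normal(exp_bits: int, man_bits: int) -> float:
    """Largest normal value of an IEEE-754-style format (top exponent reserved)."""
    bias = 2 ** (exp_bits - 1) - 1
    return (2 - 2 ** -man_bits) * 2.0 ** bias

def decimal_digits(man_bits: int) -> float:
    """Approximate significant decimal digits: (mantissa bits + 1) * log10(2)."""
    return (man_bits + 1) * math.log10(2)

print(max_normal(5, 10))             # FP16     -> 65504.0
print(max_normal(5, 2))              # FP8 e5m2 -> 57344.0
print(round(decimal_digits(23), 1))  # FP32 ~7.2 digits
print(round(decimal_digits(7), 1))   # BF16 ~2.4 digits
# Note: PyTorch's float8_e4m3fn is not strictly IEEE-style; it reserves only
# NaN (no infinities) and reaches +-448, still far below e5m2's 57344.
```

With so little headroom in e4m3, any extra quantization step that scales values upward can push them past the representable maximum, which is consistent with the NaN/black-image behavior described above.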

    Since the model was tuned using BF16 without Sage Attention, I haven't been able to ensure stability for "FP8 + Sage Attention" stacks.

    If you can tolerate a slight slowdown in processing speed, I recommend trying the Q8_0 GGUF version—it should be significantly more stable.

    ReignShad0 · Dec 28, 2025

    @easygoing0114 Just tested out Q8_0 with ComfyUI gguf and unfortunately same issue with sage attention. Sadly I'm unable to test the BF16 version as well due to hardware limitations but I'll take your word for it.

    easygoing0114 (Author) · Dec 29, 2025

    @ReignShad0 
    Thank you for the additional report. It seems this model doesn't have great compatibility with Sage Attention.

    There is a newer version of Qwen-Image-Edit available, Qwen-Image-Edit-2511, and the Comfy-Org repository has also published an FP8 version:

    https://huggingface.co/Comfy-Org/Qwen-Image-Edit_ComfyUI/tree/main/split_files/diffusion_models

    I'm currently fine-tuning this model, but the 2511 version is already so highly refined that it barely needs any adjustments. How about giving this one a try?

    Checkpoint · Qwen

    Details

    Downloads: 152
    Platform: CivitAI
    Platform Status: Available
    Created: 11/21/2025
    Updated: 5/13/2026
    Deleted: -

    Files

    qwenImageEdit2509_q3KL.gguf

    Mirrors

    CivitAI (1 mirror)