CivArchive
    GGUF_K: HyperFlux 16-Steps K_M Quants - Q5_K_M
    NSFW

    Warning: Although these quants work perfectly with ComfyUI, I couldn't get them to work with Forge UI yet. Let me know if this changes. The original non-K quants, which are verified working with Forge UI, can be found HERE.

    [Note: Unzip the download to get the GGUF. CivitAI doesn't support the GGUF format natively, hence this workaround.]
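Since the download arrives as a zip, the unzip step can be scripted. A minimal Python sketch (the destination directory is an assumption; point it at wherever your UI expects UNET/checkpoint models):

```python
import zipfile
from pathlib import Path

def extract_gguf(zip_path: str, dest_dir: str) -> list[str]:
    """Extract a zipped GGUF download and return the paths of the .gguf files found."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest)
    return [str(p) for p in dest.rglob("*.gguf")]
```

For ComfyUI, the extracted file typically goes under `models/unet/` so the GGUF loader node can see it.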

    These are the K(_M) quants for HyperFlux 16-steps. K quants are slightly more precise and performant than non-K quants of comparable size. HyperFlux is a merge of Flux.1 Dev with the 16-step Hyper-SD LoRA from ByteDance, converted to GGUF. The result is an ultra-memory-efficient, fast DEV-style (CFG-sensitive) model that generates fully denoised images in just 16 steps while consuming ~6.2 GB VRAM (for the Q4_K_M quant). Quality is also pretty close to the original DEV model at ~30 steps.

    It can be used in ComfyUI with this custom node. But I couldn't get these to work with Forge UI. See https://github.com/lllyasviel/stable-diffusion-webui-forge/discussions/1050 for where to download the VAE, clip_l and t5xxl models.
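If a download seems corrupted or Forge/ComfyUI rejects it, one quick sanity check is to read the file header. Per the GGUF specification, a valid file starts with the magic bytes `GGUF`, then a little-endian uint32 version, then uint64 tensor and key-value counts. A minimal sketch (the path is a placeholder for your extracted file):

```python
import struct

def read_gguf_header(path: str) -> dict:
    """Read the GGUF magic, version, and tensor/KV counts from a file's header."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file (magic={magic!r})")
        # Little-endian: uint32 version, then two uint64 counts.
        version, = struct.unpack("<I", f.read(4))
        tensor_count, kv_count = struct.unpack("<QQ", f.read(16))
    return {"version": version, "tensor_count": tensor_count, "kv_count": kv_count}
```

A truncated or re-zipped file will usually fail the magic check immediately.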

    Advantages

    • Quality similar to the original DEV model, while requiring only ~16 steps.

    • Better quality and expressivity than the 8-step HyperFlux in general.

    • For the same seed, the output image is pretty similar to the original DEV model, so you can use it for quick prompt/seed exploration and then do the final generation with the DEV model.

    • Sometimes you might even get better results than the DEV model due to serendipity.

    • Disadvantage: requires 16 steps.

    Which model should I download?

    [Current situation: Using the updated ComfyUI GGUF node, I can run Q6_K on my 11 GB 1080 Ti.]

    Download the one that fits in your VRAM. The additional inference cost is quite small if the model fits in the GPU. Size order is Q2 < Q3 < Q4 < Q5 < Q6. I wouldn't recommend Q2 and Q3 unless you absolutely cannot fit the model in memory.
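As a rough rule of thumb, file size (and the VRAM the weights need) scales with bits per weight. A back-of-the-envelope sketch; the bits-per-weight figures are approximations from llama.cpp's k-quant schemes, the ~12B parameter count for Flux.1 Dev and the 1 GB fixed overhead are assumptions, and real VRAM use will differ since ComfyUI offloads parts of the pipeline:

```python
# Approximate bits per weight for common quants (rounded llama.cpp figures).
BITS_PER_WEIGHT = {
    "Q2_K": 2.6, "Q3_K_M": 3.9, "Q4_K_M": 4.8,
    "Q5_K_M": 5.7, "Q6_K": 6.6, "Q8_0": 8.5,
}

def approx_size_gb(quant: str, n_params: float = 12e9) -> float:
    """Ballpark weight size in GB for a given quant and parameter count."""
    return BITS_PER_WEIGHT[quant] * n_params / 8 / 1e9

def fits(quant: str, vram_gb: float, overhead_gb: float = 1.0) -> bool:
    """Crude check: weights plus a fixed overhead for activations etc."""
    return approx_size_gb(quant) + overhead_gb <= vram_gb
```

By this estimate Q6_K lands just under 11 GB, which matches the 1080 Ti report above; treat anything within a gigabyte of your VRAM limit as a "try it and see".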

    All the license terms associated with Flux.1 Dev apply.

    PS: Credit goes to ByteDance for the Hyper-SD Flux LoRAs (the 16-step version is used here), which can be found at https://huggingface.co/ByteDance/Hyper-SD/tree/main

    Description

    FAQ

    Comments (4)

    zerocool22 · Sep 3, 2024
    CivitAI

    And Q8?

    ProjectYarwood · Oct 9, 2024
    CivitAI

    The K(_M) quants appear not to work on Apple Silicon (M1, M2, etc.). The non-K HyperFlux 16-steps works fine.

    karldonitz28599 · Nov 16, 2024 · 1 reaction
    CivitAI

    I have been trying and comparing this Flux model with others for a long time. This one is easy to use, has no problems when using many LoRAs, and can produce good images. But I hope this model can be improved or updated again to be even better.

    Checkpoint
    Flux.1 D

    Details

    Downloads
    115
    Platform
    CivitAI
    Platform Status
    Available
    Created
    9/3/2024
    Updated
    5/12/2026
    Deleted
    -

    Files

    ggufKHyperflux16StepsK_q5KM.zip

    Mirrors

    HuggingFace (1 mirror)

    Available On (1 platform)

    Same model published on other platforms. May have additional downloads or version variants.