CivArchive
    FLUX.1 [dev] fp8 versions - Scaled fp8/fp8_e4m3fn/fp8_e5m2 - fp8_e5m2
    NSFW

    Update:

    I've added some other fp8 versions of FLUX.1 [dev] that aren't hosted on Civitai anymore, specifically fp8_e4m3fn and fp8_e5m2, in addition to the scaled fp8 FLUX.1 [dev] version I had originally posted.

    The fp8_e4m3fn and fp8_e5m2 models were originally uploaded by Kijai here on Hugging Face, where they note that E5M2 and E4M3 do give slightly different results, but it's hard/impossible to say which is better. E4M3 is what people are typically referring to when they say FP8.

    Here's some info from this Reddit post regarding fp8_e4m3fn and fp8_e5m2:

    FP stands for Floating Point. Any signed floating point number is stored as 3 parts:

    1. Sign bit

    2. Mantissa

    3. Exponent

    So number = sign × mantissa × 2^exponent (more precisely, the stored exponent is biased, and for normal values the mantissa carries an implicit leading 1)

    E5M2 means that 5 bits represent the exponent and 2 bits the mantissa. E4M3 means that 4 bits represent the exponent and 3 bits the mantissa.

    E5M2 can represent a wider range of numbers than E4M3, at the cost of lower precision. But the number of distinct values that can be represented is the same: 256 (one per 8-bit pattern). So if we need more precision around 0 we use E4M3, and if we need a wider range toward the min/max values we use E5M2.

    The best way to choose a format is to analyze the distribution of weight values in the model: if they tend to cluster near zero, use E4M3; otherwise, use E5M2.
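    To make the bit layout above concrete, here is a minimal decoder sketch. decode_fp8 is a hypothetical helper written for illustration (it ignores the NaN/Inf bit patterns, and e4m3fn's lack of Inf, for brevity); real inference stacks use native dtypes such as torch.float8_e4m3fn / torch.float8_e5m2 instead.

    ```python
    def decode_fp8(byte, exp_bits, man_bits):
        """Decode one FP8 byte given exponent/mantissa widths (sign bit + exp + man = 8)."""
        assert 1 + exp_bits + man_bits == 8
        sign = -1.0 if (byte >> 7) & 1 else 1.0
        exp = (byte >> man_bits) & ((1 << exp_bits) - 1)
        man = byte & ((1 << man_bits) - 1)
        bias = (1 << (exp_bits - 1)) - 1
        if exp == 0:  # subnormal: no implicit leading 1
            return sign * (man / (1 << man_bits)) * 2.0 ** (1 - bias)
        # normal value: implicit leading 1 in the mantissa
        return sign * (1 + man / (1 << man_bits)) * 2.0 ** (exp - bias)

    # The same byte decodes differently under the two formats:
    b = 0b00111000  # sign=0, then E4M3 reads exp=0111, man=000
    print(decode_fp8(b, 4, 3))  # E4M3: 2^(7-7) -> 1.0
    print(decode_fp8(b, 5, 2))  # E5M2: 2^(14-15) -> 0.5

    # Largest finite E4M3 value: (1 + 6/8) * 2^(15-7) = 448.0,
    # far smaller than E5M2's max, illustrating the range tradeoff.
    print(decode_fp8(0b01111110, 4, 3))  # -> 448.0
    ```

    The 256-value budget is visible here: both formats spend the same 8 bits, E4M3 just allocates one more of them to the mantissa (precision) and one fewer to the exponent (range).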

    Original:

    I haven't seen this uploaded on here.

    This is the scaled fp8 FLUX.1 [dev] model uploaded to HuggingFace by comfyanonymous. It should give better results than the regular fp8 model, much closer to fp16, while running much faster than Q quants. It works with the TorchCompileModel node. Note: for whatever reason, this model does not work with Redux or with some ControlNet models.

    The fp8 scaled checkpoint is a slightly experimental one, specifically tuned to try to get the highest quality while using fp8 matrix multiplication on 40 series/Ada/H100-class hardware, so it will very likely be lower quality than Q8_0, but it will run inference faster if your hardware supports fp8 ops.

    From HuggingFace:

    Test scaled fp8 flux dev model, use with the newest version of ComfyUI with weight_dtype set to default. Put it in your ComfyUI/models/diffusion_models/ folder and load it with the "Load Diffusion Model" node.
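    As a sketch, the placement step from the quote above looks like the following. The paths are assumptions (a ComfyUI checkout in the current directory); adjust them to your install.

    ```shell
    # Assumed location of your ComfyUI checkout; adjust as needed.
    COMFY=./ComfyUI
    mkdir -p "$COMFY/models/diffusion_models"

    # After downloading the checkpoint, move it into place so the
    # "Load Diffusion Model" node can find it.
    MODEL=flux1DevFp8VersionsScaled_fp8E5m2.safetensors
    if [ -f "$MODEL" ]; then
        mv "$MODEL" "$COMFY/models/diffusion_models/"
    fi
    ```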

    Description

    fp8_e5m2 version of FLUX.1 [dev]. This file was originally uploaded by Kijai here on Hugging Face.

    Checkpoint
    Flux.1 D

    Details

    Downloads
    174
    Platform
    CivitAI
    Platform Status
    Available
    Created
    3/11/2025
    Updated
    5/12/2026
    Deleted
    -

    Files

    flux1DevFp8VersionsScaled_fp8E5m2.safetensors