    FLUX.1 [dev] fp8 versions - Scaled fp8/fp8_e4m3fn/fp8_e5m2 - fp8_e4m3fn

    Update:

    I've added some other fp8 versions of FLUX.1 [dev] that aren't hosted on Civitai anymore, specifically fp8_e4m3fn and fp8_e5m2, in addition to the scaled fp8 FLUX.1 [dev] version I had originally posted.

    The fp8_e4m3fn and fp8_e5m2 models were originally uploaded by Kijai here on Hugging Face, where they note that E5M2 and E4M3 do give slightly different results, but it's hard/impossible to say which is better. E4M3 is what people are typically referring to when they say FP8.

    Here's some info from this Reddit post regarding fp8_e4m3fn and fp8_e5m2:

    FP stands for Floating Point. Any signed floating point number is stored as 3 parts:

    1. Sign bit

    2. Mantissa

    3. Exponent

    So, roughly, number = sign * mantissa * 2^exponent (more precisely, for normal values: value = (-1)^sign * 1.mantissa * 2^(exponent - bias), where the bias depends on the exponent width).

    E5M2 means that 2 bits represent mantissa and 5 bits represent exponent. E4M3 means that 3 bits represent mantissa and 4 bits represent exponent.

    E5M2 can represent a wider range of numbers than E4M3, at the cost of lower precision. But the number of distinct values that can be represented is the same: 256 bit patterns (a few of which are reserved for NaN/Inf). So if we need more precision around 0, we use E4M3; if we need more range toward the min/max values, we use E5M2.

    The best way to choose a format is to analyze the distribution of weight values in the model: if they tend to be closer to zero, use E4M3; otherwise, use E5M2.
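    The sign/exponent/mantissa split described above can be sketched in pure Python. This is a minimal illustrative decoder, not anything from the actual model files; it ignores the NaN/Inf bit patterns and assumes the common biases of 7 for E4M3 and 15 for E5M2:

```python
def decode_fp8(byte, exp_bits, man_bits, bias):
    """Decode one fp8 byte into a Python float.

    Illustrative only: NaN/Inf bit patterns are not special-cased,
    this just shows the sign/exponent/mantissa layout.
    """
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exp = (byte >> man_bits) & ((1 << exp_bits) - 1)
    man = byte & ((1 << man_bits) - 1)
    if exp == 0:
        # subnormal: no implicit leading 1, fixed exponent of (1 - bias)
        return sign * (man / (1 << man_bits)) * 2.0 ** (1 - bias)
    # normal: implicit leading 1, biased exponent
    return sign * (1.0 + man / (1 << man_bits)) * 2.0 ** (exp - bias)

# E4M3: 4 exponent bits, 3 mantissa bits, bias 7
print(decode_fp8(0x7E, 4, 3, 7))   # largest finite E4M3 value: 448.0
# E5M2: 5 exponent bits, 2 mantissa bits, bias 15
print(decode_fp8(0x7B, 5, 2, 15))  # largest finite E5M2 value: 57344.0
```

    Running it over all 256 byte values for each format makes the trade-off concrete: E5M2 reaches 57344 but with only 4 steps per power of two, while E4M3 tops out at 448 with 8 steps per power of two.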

    Original:

    I haven't seen this uploaded on here.

    This is the scaled fp8 FLUX.1 [dev] model uploaded to Hugging Face by comfyanonymous. It should give better results than the regular fp8 model, much closer to fp16, while running much faster than Q quants. It works with the TorchCompileModel node. Note: for whatever reason, this model does not work with Redux or with some ControlNet models.

    The fp8 scaled checkpoint is a slightly experimental one, specifically tuned to get the highest quality while using fp8 matrix multiplication on the 40 series/Ada/H100/etc., so it will very likely be lower quality than Q8_0 but will run inference faster if your hardware supports fp8 ops.
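    The "scaled" idea can be sketched in plain Python: pick a per-tensor scale so the largest weight lands on E4M3's largest finite value (448), round each weight to the nearest representable code, then multiply the scale back on load. This is an illustrative sketch of per-tensor scaling under assumed E4M3 encoding rules, not comfyanonymous's actual conversion:

```python
def e4m3_grid():
    """All non-negative finite E4M3 values (bias 7; the all-ones
    exponent+mantissa pattern is NaN and is skipped)."""
    vals = []
    for b in range(128):
        exp, man = (b >> 3) & 0xF, b & 0x7
        if exp == 0xF and man == 0x7:
            continue  # NaN encoding
        v = (man / 8) * 2.0 ** -6 if exp == 0 else (1 + man / 8) * 2.0 ** (exp - 7)
        vals.append(v)
    return sorted(vals)

def scaled_fp8_roundtrip(weights):
    """Quantize weights to E4M3 with one per-tensor scale, then dequantize."""
    grid = e4m3_grid()
    scale = max(abs(w) for w in weights) / grid[-1]  # grid[-1] == 448.0
    out = []
    for w in weights:
        q = min(grid, key=lambda v: abs(abs(w) / scale - v))  # nearest code
        out.append(q * scale if w >= 0 else -q * scale)
    return out, scale

out, scale = scaled_fp8_roundtrip([896.0, -0.25, 0.3])
print(scale, out)  # scale is 2.0; 896.0 and -0.25 round-trip exactly
```

    A real scaled checkpoint chooses scales per weight tensor rather than for the whole model; the sketch only shows why a scale buys back dynamic range that a plain fp8 cast would clip away.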

    From Hugging Face:

    Test scaled fp8 flux dev model, use with the newest version of ComfyUI with weight_dtype set to default. Put it in your ComfyUI/models/diffusion_models/ folder and load it with the "Load Diffusion Model" node.

    Description

    fp8_e4m3fn version of FLUX.1 [dev]. This file was originally uploaded by Kijai here on Hugging Face.
