CivArchive
    Fluximation - v1_NF4
    NSFW

We aimed for an anime-style flux model.

    Description

I tried converting it with forge.

    FAQ

    Comments (7)

Timmek · Aug 28, 2024 · 12 reactions
    CivitAI

    It's just a merge of lora and the main model. Why are you deceiving people by prescribing "checkpoint train" if it does not correspond to reality?

    Aikimi
    Author
Aug 28, 2024 · 8 reactions

    Are you an administrator of Civitai? Sorry, could you please show me the page where the definition is written?

Timmek · Aug 29, 2024 · 2 reactions

@Aikimi I'm just an ordinary user who doesn't like deception. Why do you label it "Trained" when in fact it is a merge with a LoRA?

```json
{
  "modelspec.resolution": "1024x1024",
  "modelspec.sai_model_spec": "1.0.0",
  "modelspec.merged_from": "Flux.1-dev, animeflux_lora_vN",
  "modelspec.architecture": "flux-1-dev",
  "modelspec.implementation": "https://github.com/black-forest-labs/flux",
  "format": "pt",
  "modelspec.date": "2024-08-17T05:14:00",
  "modelspec.title": "outputbf16"
}
```

    Aikimi
    Author
Aug 30, 2024 · 2 reactions

    @Timmek In my understanding, 'merge' refers to something created by mixing existing uploaded items.
    I don't have any strong preferences, so I'm okay with changing it if needed...

NowhereManGo · Aug 31, 2024 · 2 reactions

The nomenclature of "Merged" vs "Trained" is ambiguous. It would probably have been better to call them "Original" vs "Remixed". Some of the SDXL/Flux models labeled as "Trained" are in fact made by merging a number of original LoRAs into a base.

    Besides, technically, training a LoRA is not all that different from training a fine-tuned model.

    Since the LoRA merged into Flux-Dev is in fact a trained original LoRA by Aikimi, I really don't see a problem here.

443152 · Sep 23, 2024 · 1 reaction

You might only consider "full fine-tuning a checkpoint" as "checkpoint trained". However, if you train a LoRA and then merge it into a checkpoint, you could also call that "checkpoint trained".

    This is because training LoRAs and directly fine-tuning checkpoints are essentially the same in that they both require time and money. They are fundamentally different from "checkpoint merged", which only involves combining several existing checkpoints.
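For context on what "merging a LoRA into a checkpoint" actually does: the standard fold-in is just a weight update, W' = W + (α/r)·B·A, applied per layer. Below is a minimal NumPy sketch of that operation; the function name, shapes, and toy values are illustrative assumptions, not the actual merge script used for this model.

```python
import numpy as np

def merge_lora(base_weight, lora_down, lora_up, alpha, rank):
    """Fold a trained LoRA into a base weight matrix.

    base_weight: (out, in)  base model weight W
    lora_down:   (rank, in) LoRA "A" matrix
    lora_up:     (out, rank) LoRA "B" matrix
    Returns W + (alpha / rank) * (B @ A).
    """
    scale = alpha / rank
    return base_weight + scale * (lora_up @ lora_down)

# Toy example: a rank-2 LoRA folded into a 4x4 zero weight.
base = np.zeros((4, 4))
down = np.ones((2, 4))
up = np.ones((4, 2))
merged = merge_lora(base, down, up, alpha=1.0, rank=2)
```

Once folded in, the merged checkpoint is a single set of weights with no separate LoRA file, which is why the distinction between "trained" and "merged" is invisible at inference time.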

Lazman · Sep 24, 2024 · 4 reactions

    @Elysia_Saikou Really not sure what people be gettin so hot and bothered about. This is one of the only flux models small enough for a person with an 8gb GPU. So, I'm happy it exists so I can give flux a shot.

    Checkpoint
    Flux.1 D

    Details

    Downloads
    2,634
    Platform
    CivitAI
    Platform Status
    Available
    Created
    8/20/2024
    Updated
    5/17/2026
    Deleted
    -

    Files

    Available On (1 platform)

    Same model published on other platforms. May have additional downloads or version variants.