CivArchive
    Super Simple GGUF (Quantized) Flux LoRA Workflow - Super Simple
    NSFW

    If your VRAM is insufficient for Flux, you need to run a quantized version. This is a really simple workflow with LoRA loading and upscaling. Keep in mind that quantized versions need slightly higher LoRA strength values than the full-precision ones.

    This workflow is based on the GGUF model loader in ComfyUI:
    https://github.com/city96/ComfyUI-GGUF.
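    To illustrate what the workflow does, here is a minimal sketch of a comparable graph in ComfyUI's API (JSON) format, wiring the GGUF UNet loader into a LoRA loader. The node class names ("UnetLoaderGGUF", "DualCLIPLoaderGGUF") come from ComfyUI-GGUF; the filenames and strength values are placeholders, not this workflow's actual settings.

```python
import json

# Sketch of a GGUF Flux + LoRA graph in ComfyUI API format.
# Filenames and strengths below are placeholder assumptions.
graph = {
    "1": {
        "class_type": "UnetLoaderGGUF",
        "inputs": {"unet_name": "flux1-dev-Q4_K_S.gguf"},
    },
    "2": {
        "class_type": "DualCLIPLoaderGGUF",
        "inputs": {
            "clip_name1": "t5-v1_1-xxl-encoder-Q5_K_M.gguf",
            "clip_name2": "clip_l.safetensors",
            "type": "flux",
        },
    },
    "3": {
        "class_type": "LoraLoader",
        "inputs": {
            "model": ["1", 0],  # wire: MODEL output of node 1
            "clip": ["2", 0],   # wire: CLIP output of node 2
            "lora_name": "my_lora.safetensors",
            # Quantized checkpoints tend to need slightly higher
            # strength than the full-precision model.
            "strength_model": 1.1,
            "strength_clip": 1.0,
        },
    },
}
print(json.dumps(graph, indent=2))
```

    Further nodes (sampler, VAE decode, upscale, save) would take their model and clip inputs from node "3" in the same `["node_id", output_index]` wiring style.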

    Update:

    Added an upgraded "Simple" version. It requires 2 custom nodes to be installed. What is different in it:

    1. Added multi-LoRA support with the rgthree LoRA stacker. This is the best pick for low-end video cards I've been able to find.

    2. Added a CivitAI-friendly file saver with the required supporting nodes.

    3. Organized everything into groups a little bit.

    It is still really easy to use, and it is now a good starting point for more complex workflows, as the generation info will be saved for CivitAI even if you do more complex operations.
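    Once a graph is exported in API format, it can be queued on a local ComfyUI instance over HTTP. A hedged sketch, assuming ComfyUI's default `/prompt` endpoint on port 8188 (nothing here is specific to this workflow):

```python
import json
import urllib.request

def build_prompt_request(graph: dict,
                         host: str = "127.0.0.1",
                         port: int = 8188) -> urllib.request.Request:
    """Build a POST request queuing an API-format graph on ComfyUI.

    The /prompt endpoint and port 8188 are ComfyUI's defaults.
    """
    payload = json.dumps({"prompt": graph}).encode("utf-8")
    return urllib.request.Request(
        f"http://{host}:{port}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# With a server running, sending it would be:
#   urllib.request.urlopen(build_prompt_request(graph))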

    Workflows
    Flux.1 D

    Details

    Downloads
    1,445
    Platform
    CivitAI
    Platform Status
    Available
    Created
    8/18/2024
    Updated
    5/12/2026
    Deleted
    -

    Files

    superSimpleGGUFQuantized_superSimple.zip