    FLUX + Hyper Lora 8 Steps + LLM + Detailing + Inpainting + Upscaling + Low Vram - v3.0
    NSFW

    Made with the Hyper FLUX 8-steps LoRA and Flux Dev Q4_0 GGUF. The main goal is to run FLUX on 8 GB of VRAM (my own configuration).
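As a rough sanity check on why the Q4_0 quantization matters for an 8 GB card, here is a back-of-the-envelope estimate. The parameter count and packing details are approximations I'm supplying, not values taken from the workflow:

```python
# Approximate Flux Dev weight sizes at two precisions.
# Assumptions: ~12B transformer parameters; GGUF Q4_0 packs 32 weights
# into 18 bytes (16 bytes of 4-bit values + a 2-byte fp16 scale),
# i.e. 4.5 bits per weight on average.
PARAMS = 12e9

def weights_gib(bits_per_weight: float) -> float:
    """Approximate weight storage in GiB at the given bits-per-weight."""
    return PARAMS * bits_per_weight / 8 / 2**30

fp16 = weights_gib(16)           # ~22 GiB: far beyond an 8 GB card
q4_0 = weights_gib(18 * 8 / 32)  # ~6.3 GiB: fits on 8 GB with headroom
print(f"fp16 ~ {fp16:.1f} GiB, Q4_0 ~ {q4_0:.1f} GiB")
```

This is only the weight storage; activations and the text encoder add more, which is why the workflow also uses an fp8 T5 encoder.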

    Multiple functions in the workflow are controlled by a switch in Group 0 (the black one).

    1- Inpainting: offers three options (SAM Detector + custom mask, custom mask only, or an uploaded mask).

    2- Prompt Generation: a prompt can be generated either from text or from an image via an LLM.

    3- 1st Detailing: performed with FLUX.

    4- Upscaling: done using the Tiled Diffusion node, SDXL Lightning, and CN SDXL Tile. It runs quickly and produces stunning results.

    5- 2nd Detailing: a second pass with SDXL/FLUX over the background and the main subject.

    6- Post-Processing: a final post-processing step adjusts brightness, contrast, etc.
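To make step 4 concrete: tiled diffusion upscaling processes the image in overlapping tiles so each piece fits in VRAM, then blends the seams. A minimal sketch of how overlapping tile offsets can be laid out along one axis (the tile and overlap sizes below are illustrative, not the workflow's actual settings):

```python
# Sketch of an overlapping-tile layout like the one tiled upscaling relies on.
def tile_starts(length: int, tile: int, overlap: int) -> list[int]:
    """Start offsets of `tile`-sized windows covering `length` pixels,
    each overlapping the next by at least `overlap` pixels."""
    if length <= tile:
        return [0]
    stride = tile - overlap
    starts = list(range(0, length - tile, stride))
    starts.append(length - tile)  # last tile sits flush with the image edge
    return starts

# A 2048-px axis split into 768-px tiles with 64 px of overlap:
print(tile_starts(2048, 768, 64))  # → [0, 704, 1280]
```

Each tile is denoised independently at full resolution, which is what keeps peak VRAM usage bounded by the tile size rather than the output size.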
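Step 6's adjustments boil down to simple per-pixel math. A minimal sketch of a brightness/contrast pass (this is an illustration of the idea, not the workflow's actual node):

```python
# Illustrative brightness/contrast adjustment: scale around mid-gray (128)
# for contrast, add a flat offset for brightness, clamp to the 0..255 range.
def adjust(pixel: int, brightness: int = 0, contrast: float = 1.0) -> int:
    value = (pixel - 128) * contrast + 128 + brightness
    return max(0, min(255, round(value)))

row = [0, 64, 128, 192, 255]
print([adjust(p, brightness=10, contrast=1.2) for p in row])  # → [0, 61, 138, 215, 255]
```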

    Make sure to download the exact models shown in the workflow. I selected them specifically to work with my 8GB VRAM card.

    Models:

    - flux1-dev-Q4_0.gguf

    - Hyper-FLUX.1-dev-8steps-lora.safetensors

    - t5xxl_fp8_e4m3fn.safetensors

    - 4xLeexicaDat2_otf.pth

    - 4x-UltraSharp.pth

    - 4xFaceUpDAT.pth

    - dreamshaperXL_lightningDPMSDE.safetensors

    - ttplanetSDXLControlnet_v20Fp16.safetensors

    Please note: the package required for Ollama is ComfyUI Ollama by stavsap.

    ComfyUi-Ollama-YN is not required and creates conflicts with the Ollama nodes.
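For reference, the Ollama nodes ultimately talk to a local Ollama server over its REST API. A hedged sketch of the kind of request body an image-to-prompt call sends to Ollama's documented `/api/generate` endpoint (`llava` here is just an example vision model name, not one the workflow mandates):

```python
import base64
import json

def build_generate_payload(model: str, prompt: str, image_bytes: bytes) -> dict:
    """Body for POST http://localhost:11434/api/generate (Ollama REST API)."""
    return {
        "model": model,
        "prompt": prompt,
        "images": [base64.b64encode(image_bytes).decode("ascii")],  # base64-encoded image
        "stream": False,  # ask for one complete response instead of a stream
    }

payload = build_generate_payload("llava", "Describe this image as a prompt.", b"<png bytes>")
print(json.dumps(payload, indent=2))
```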

    Description

    I fixed the upscaling group

    FAQ

    Comments (5)

    Santaonholidays · Sep 7, 2024

    Great and all, but how do I add an extra model for llama? xD

    Akumetsu971 (Author) · Sep 7, 2024

    Open a command window -> ollama pull <name of model>

    For example:
    ollama pull llama2

    Santaonholidays · Sep 7, 2024

    @Akumetsu971 I did that but the extra model below is missing

    mrkkkk · Sep 16, 2024

    @Santaonholidays do you mean you want to use the img-to-prompt function?
    If so, download a vision model from the Ollama library; right now minicpm v2.6 seems like a good option.

    officer_mcvengeance · Sep 12, 2024 · 5 reactions

    Definitely need links to models

    Workflows
    Flux.1 D

    Details

    Downloads: 412
    Platform: CivitAI
    Platform Status: Available
    Created: 9/6/2024
    Updated: 5/14/2026
    Deleted: -

    Files

    hyperFLUXLLMDetailing_v30.zip

    Mirrors