CivArchive

    βš™οΈ Key Features of this Model:

    1. Advanced Merge Methods:

      • Uses techniques like weighted sum, interpolation, and selective update to deliver efficient and high-quality outputs.

    2. Layer Importance Optimization:

      • Early layers are optimized for speed, while the later layers are designed to enhance image quality.

    3. Dynamic Tensor Resizing:

      • Ensures seamless compatibility between different model tensors, making your setup flexible and robust.
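The three ideas above can be sketched in a few lines of NumPy. This is a minimal illustration under assumed names — `model_a`, `model_b`, the blend ratios, and the crude pad/crop resize are all hypothetical, not the actual recipe behind this merge:

```python
import numpy as np

def weighted_sum(a: np.ndarray, b: np.ndarray, alpha: float) -> np.ndarray:
    """Linear interpolation between two tensors: (1 - alpha) * a + alpha * b."""
    return (1.0 - alpha) * a + alpha * b

def resize_to_match(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Crudely pad/crop b to a's shape so mismatched tensors can still merge
    (a stand-in for the 'dynamic tensor resizing' idea, not the real method)."""
    out = np.zeros_like(a)
    slices = tuple(slice(0, min(da, db)) for da, db in zip(a.shape, b.shape))
    out[slices] = b[slices]
    return out

def merge_checkpoints(model_a: dict, model_b: dict,
                      alpha_early: float, alpha_late: float, split: int) -> dict:
    """Selective update: early layers lean toward model_a (speed),
    later layers toward model_b (quality)."""
    merged = {}
    for i, (name, ta) in enumerate(sorted(model_a.items())):
        tb = model_b.get(name)
        if tb is None:
            merged[name] = ta  # tensor only exists in model_a: keep as-is
            continue
        if tb.shape != ta.shape:
            tb = resize_to_match(ta, tb)  # reconcile mismatched shapes
        alpha = alpha_early if i < split else alpha_late
        merged[name] = weighted_sum(ta, tb, alpha)
    return merged
```

With `alpha_early` near 0 the early layers stay close to the fast base model, while a larger `alpha_late` pulls the later layers toward the quality-focused model.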

      **NF4 version**

    πŸŽ›οΈ Setting Up GGUF Support:

    - For ComfyUI: make sure GGUF support is installed and properly set up.

    - For WebUI Forge: make sure GGUF support is installed and properly set up.

    https://civarchive.com/articles/6715
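For ComfyUI, GGUF loading typically comes from city96's ComfyUI-GGUF custom node. Below is a minimal sketch of the usual install steps, assuming a default ComfyUI layout; the script only prints the two commands so you can review the paths before running them yourself:

```shell
#!/bin/sh
# Sketch: typical GGUF-support install for ComfyUI via the ComfyUI-GGUF
# custom node. COMFY_DIR is an assumption; point it at your install.
COMFY_DIR="${COMFY_DIR:-$HOME/ComfyUI}"
NODE_DIR="$COMFY_DIR/custom_nodes/ComfyUI-GGUF"

# Printed rather than executed, so you can check the paths first:
echo "git clone https://github.com/city96/ComfyUI-GGUF \"$NODE_DIR\""
echo "pip install --upgrade gguf"
```

After cloning, restart ComfyUI so the new loader nodes are picked up.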

    πŸ› οΈ Complete Installation Guide

    πŸ“ Setup Structure

    📂 ComfyUI/
    ├── 📂 models/
    │   ├── 📂 diffusion_models/
    │   │   └── (basic) 📄 bernoulli.gguf
    │   ├── 📂 text_encoders/
    │   │   ├── (basic) 📄 clip_l.safetensors
    │   │   ├── (option 1) 📄 t5xxl_fp16.safetensors
    │   │   ├── (option 2) 📄 t5xxl_fp8_e4m3fn.safetensors
    │   │   └── (option 3) 📄 t5xxl_fp8_e4m3fn_scaled.safetensors
    │   └── 📂 vae/
    │       └── 📄 ae.safetensors
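The layout above is easy to get subtly wrong, so a small checker can save a debugging session. This sketch assumes the file names shown in the tree (`check_setup` and the list names are hypothetical helpers, not part of any official tooling):

```python
from pathlib import Path

# Files required by the setup structure above (relative to the ComfyUI root).
REQUIRED = [
    "models/diffusion_models/bernoulli.gguf",
    "models/text_encoders/clip_l.safetensors",
    "models/vae/ae.safetensors",
]
# Any ONE of these T5XXL encoders is enough.
T5_OPTIONS = [
    "models/text_encoders/t5xxl_fp16.safetensors",
    "models/text_encoders/t5xxl_fp8_e4m3fn.safetensors",
    "models/text_encoders/t5xxl_fp8_e4m3fn_scaled.safetensors",
]

def check_setup(root: str) -> list:
    """Return a list of missing files; an empty list means the layout is complete."""
    base = Path(root)
    missing = [p for p in REQUIRED if not (base / p).is_file()]
    if not any((base / p).is_file() for p in T5_OPTIONS):
        missing.append("one of: " + ", ".join(T5_OPTIONS))
    return missing
```

Run it as `check_setup("/path/to/ComfyUI")` and install whatever it reports as missing.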

    💎 Essential Components

    This merged model offers a balanced solution for AI-driven image generation, emphasizing both speed and quality. Whether you're processing single images or large batches, it delivers high-quality visuals efficiently.

    🔀 Text Encoders - The Brain Behind Natural Language Understanding

    Note: You only need to choose ONE of the T5XXL options below, based on your hardware capabilities.
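Since only one T5XXL file is needed, the choice can be made explicit with a small helper. The 16 GB threshold below is an illustrative assumption (fp16 needs roughly twice the memory of fp8), not guidance from this model's author:

```python
def pick_t5_encoder(vram_gb: float) -> str:
    """Rule-of-thumb T5XXL choice by available GPU memory.
    The threshold is an illustrative assumption, not official guidance."""
    if vram_gb >= 16:
        return "t5xxl_fp16.safetensors"            # full-precision encoder
    return "t5xxl_fp8_e4m3fn_scaled.safetensors"   # fp8, roughly half the memory
```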

    🎭 VAE - The Visual Artist

    πŸ‘¨β€πŸ’» Developer Information

    This workflow guide was created by Abdallah Al-Swaiti:

    For additional tools and updates, check out the OllamaGemini Node: GitHub Repository


    Description

    FAQ

    Comments (9)

    amazingbeauty · Aug 29, 2024
    CivitAI

    Is there any GGUF Q8 here? :/ BF16 doesn't work with CPU. Also, the Q8 model has bf16 in its title!?

    AbdallahAlswa80
    Author
    Aug 29, 2024 · 1 reaction

    There are many versions there; you can see them in the showcase images.

    amazingbeauty · Aug 30, 2024

    @AbdallahAlswa80 Thank you. I picked the GGUF Q8 one, expecting it to be better than Q4/Q5, but its name shows bf16. I don't know if the name is wrong or if it really is BF16, which doesn't work with my setup.

    AbdallahAlswa80
    Author
    Aug 30, 2024

    @amazingbeauty The bf16 doesn't work, but Q8 works perfectly.

    PirateGirl · Aug 29, 2024 · 4 reactions
    CivitAI

    I just tested your version of Q2K and it's a lot better than city96's version. It actually makes coherent images that look acceptable (better than dev), at least on a laptop screen. It probably runs fine on 4 GB VRAM and 16 GB system RAM. My system RAM usage stayed under 16 GB the whole time, and I was running Chrome with 30 tabs and Opera with over 10 tabs at the same time.

    AbdallahAlswa80
    Author
    Aug 30, 2024 · 2 reactions

    What's good about this model is that it can run alongside other models smoothly; I can now use it with SDXL and video models without trouble. city96 is an amazing guy, and he just applied the GGUF technique to the original models. Here, besides the GGUF technique, the model itself was cooked through many Python scripts to reach this power. Please enjoy it and share your art with me!

    NeckRomancer · Dec 21, 2024
    CivitAI

    "advice :choose the model which has size of your gpu"
    Which size are you referring to? The raw file size?
    With a measly GTX 1060 6GB, what would be the best option?

    NeckRomancer · Dec 23, 2024 · 1 reaction

    @AbdallahAlswa80 Thank you for the reply!
    Unfortunately, I discovered that my current specs are too potato 😩
    It's no fun having to wait too long for results when iterating is so important.
    Some data is always being loaded into system memory and shuffled by the CPU with Flux on my machine, which makes things over 10 times slower.
    For the time being, I'll stick to SD1.5
    Thanks again, and happy generating :)

    Checkpoint
    Flux.1 D

    Details

    Downloads
    127
    Platform
    CivitAI
    Platform Status
    Available
    Created
    8/29/2024
    Updated
    5/13/2026
    Deleted
    -

    Files

    bernoulli_q2k.zip

    Mirrors

    HuggingFace (1 mirror)
    CivitAI (1 mirror)