CivArchive

    Original content based on the model definition by @Defozo. This model should allow users to experience the fp16 model's content on lower-spec hardware.

    This is a merge of the flux1-dev fp16 base model and the NSFW MASTER FLUX LoRA: https://civarchive.com/models/667086?modelVersionId=746602

    Version 1.2 adds the PussyDiffusion - Flux LoRA, merged in to improve vaginal detail:

    https://civarchive.com/models/983498/pussydiffusion-flux

    The version 1.2 hyper 8-step models contain everything in version 1.2, plus the ByteDance Hyper-FLUX Acceleration LoRA merged in, allowing images to be generated in 8 (sometimes 9 or 10) steps:

    https://civarchive.com/models/691446?modelVersionId=774008

    NOTE: When the ByteDance Hyper-FLUX Acceleration LoRA is merged into a model and the model is then converted to fp8 precision, the conversion corrupts the hyper 8-step acceleration blocks from the LoRA. This causes the model to behave like a standard (non-hyper) fp8 model. The good news is that the LoRA can still be applied at generation time, so you still get the benefit of hyper 8-step with the fp8 model.

    If you like this content, try a version of my other model series:

    https://civarchive.com/models/967270?modelVersionId=1171422


    To convert and merge, I used scripts from:
    https://github.com/kohya-ss/sd-scripts/

    To convert the models to GGUF, I used scripts from:

    https://github.com/Zuntan03/EasyForge/

    I used --ratios 0.8 in the merge command for version 1.0 and --ratios 0.65 for version 1.1.
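    As a rough illustration of what the ratio does (this is a conceptual sketch, not the kohya-ss merge script itself): each base weight gets the LoRA's update added at the chosen strength, so a lower ratio like 0.65 blends the LoRA in more weakly than 0.8. Toy scalar "weights" stand in for the real tensors here.

```python
def merge_lora(base, lora_delta, ratio):
    """Merge LoRA deltas into base weights at the given strength.

    Conceptually: W_merged = W_base + ratio * delta, applied per tensor.
    Tensors the LoRA doesn't touch pass through unchanged.
    """
    return {name: w + ratio * lora_delta.get(name, 0.0)
            for name, w in base.items()}

# Toy example: the LoRA only modifies one "tensor".
base = {"attn.q": 1.00, "attn.k": -0.50}
delta = {"attn.q": 0.20}

v10 = merge_lora(base, delta, 0.80)  # version 1.0 strength
v11 = merge_lora(base, delta, 0.65)  # version 1.1 strength
# attn.q becomes 1.16 at ratio 0.8 vs 1.13 at ratio 0.65;
# attn.k is untouched either way.
```

    Dropping the ratio from 0.8 to 0.65 is exactly the version 1.1 change described below: the LoRA's influence is scaled down uniformly rather than removed.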

    Requires the ae, clip_l, and (t5xxl_fp8_e4m3fn or t5xxl_fp16) VAE/text encoders.

    Creates fairly good NSFW content. No trigger keywords needed. Tested with base Flux image generation settings.

    NOTE: If you're not interested in clothing, use the version 1.0 model; it does a much better job with female anatomy. Version 1.1 sacrifices some quality to work better with clothed women.

    Description

    Changed the NSFW MASTER FLUX LoRA merge ratio from 0.8 to 0.65 to prevent nipples and vagina from bleeding through clothing.

    FAQ

    Comments (32)

    Visidious99 · Oct 3, 2024 · 1 reaction
    CivitAI

    Great model thank you for sharing

    tedbiv
    Author
    Oct 3, 2024

    glad you like it.

    Starfish88 · Oct 5, 2024
    CivitAI

    I got an "AttributeError: 'NoneType' object has no attribute 'sd_checkpoint_info'" message when I run this version and the previous version too. Any help plz?

    tedbiv
    Author
    Oct 5, 2024

    are you running on forge webui? model needs ae, clip_l and t5xxl_fp8_e4m3fn in vae/text encoder box and uses euler sampling method and simple or beta schedule type.

    Starfish88 · Oct 5, 2024

    @tedbiv Ya, I did that, but got another error: "RuntimeError: stack expects each tensor to be equal size, but got [256, 4096] at entry 0 and [293, 4096] at entry 4". I don't get what the issue is? And yes, I'm using forge webui

    tedbiv
    Author
    Oct 5, 2024

    @Starfish88 what size images are you trying to create? most of my testing was 1024x1024, 1024x1280, 768x1344, 896x1152. i remember getting that tensor error once... i don't remember what it was. make sure diffusion in low bits is set to automatic and cfg scale is set to 1

    tedbiv
    Author
    Oct 5, 2024

    are you able to run flux-dev-fp8?

    Starfish88 · Oct 5, 2024

    @tedbiv Oh it works fine now! I think it was forge webui error, all the setting was right. Thnx bro, great work.

    tedbiv
    Author
    Oct 5, 2024

    @Starfish88 no problem. glad it worked for you. enjoy... :)

    daniel691020314 · Oct 11, 2024 · 1 reaction
    CivitAI

    ERROR: RuntimeError: Error(s) in loading state_dict for IntegratedAutoencoderKL:
    size mismatch for encoder.conv_out.weight: copying a param with shape torch.Size([32, 512, 3, 3]) from checkpoint, the shape in current model is torch.Size([8, 512, 3, 3]).
    size mismatch for encoder.conv_out.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([8]).
    size mismatch for decoder.conv_in.weight: copying a param with shape torch.Size([512, 16, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 4, 3, 3]).

    tedbiv
    Author
    Oct 11, 2024

    are you running on forge webui? did you use ae, clip_l and t5xxl_fp8_e4m3fn in vae/text encoder?

    daniel691020314 · Oct 12, 2024

    @tedbiv I've tried to use it on Forge and ComfyUI, but both of them don't work; only model 1.0 is working.

    daniel691020314 · Oct 12, 2024

    @tedbiv Yes: ae, clip_l and t5xxl_fp8_e4m3fn, I have them all in.

    tedbiv
    Author
    Oct 12, 2024

    @daniel691020314 that's weird... the only difference in generation was lora ratio? although version 1.0 is better for nudes :) are you using any loras or just this model?

    daniel691020314 · Oct 13, 2024

    @tedbiv Just it alone, because I love your model! It's very beautiful!!

    tedbiv
    Author
    Oct 13, 2024

    @daniel691020314 thanks. it looks like it's pulling something with fp32 weight. version 1.0 and 1.1 are both fp8. i'll have to do some digging. i remember getting a similar error when using fp16 text encoder...

    var9324695s · Oct 25, 2024

    where can i find the AE file?

    tedbiv
    Author
    Oct 25, 2024
    JannyLuxx · Oct 16, 2024
    CivitAI

    Could you also create a GGUF version?

    tedbiv
    Author
    Oct 16, 2024· 3 reactions

    i'm in the process of getting the llama.cpp stuff up and running. currently the convert_hf_to_gguf.py script errors out saying it can't find the torch module, but pip list shows torch is installed... so i'm debugging. when i get the scripts running i'll create a q8 gguf version.

    tedbiv
    Author
    Oct 21, 2024· 3 reactions

    i haven't forgotten you... i've got the llama.cpp stuff built and running. now i'm running into issues with convert script not finding an associated config.json file for the created models. so i'm off trying to figure out how to create them... stay tuned. :)

    sss0611 · Oct 18, 2024
    CivitAI

    this msg keeps coming up on Forge

    AssertionError: You do not have VAE state dict!

    even I have the 't5xxl_fp8_e4m3fn.safetensors'

    plz help me~

    tedbiv
    Author
    Oct 18, 2024

    requires ae, clip_l and t5xxl_fp8_e4m3fn vae/text encoders

    2688732 · Oct 20, 2024 · 1 reaction
    CivitAI

    The best NSFW flux model there is at the moment IMHO.... :-)

    tedbiv
    Author
    Oct 20, 2024

    thanks. if you're not using clothing, use v1.0; it does better/more consistent nipples than v1.1. i'm working on trying to create a q08 gguf version, but i'm having issues trying to create config.json files for the base models... :(

    2688732 · Oct 20, 2024

    @tedbiv  Thanks for the info, Yeah a q8 version would be amazing!

    kaytransg196 · Oct 25, 2024
    CivitAI

    is there Quantized GGUF version?

    tedbiv
    Author
    Oct 25, 2024

    not yet. i'm working on it. i'm having problems with conversion scripts. they need config.json files for the model and i don't know how to create/what content is needed. it's a work in progress, i want to create a q08 gguf version...

    tedbiv
    Author
    Oct 25, 2024

    so i've created a q08 gguf version, but it makes crappy images when used with loras. is this normal behavior for gguf models?

    MrFlex · Oct 31, 2024

    @tedbiv in Forge UI you have to select, at the top, the "Diffusion in Low Bits" Automatic fp16 option and it should work. i imagine ComfyUI has something similar in some node

    tedbiv
    Author
    Oct 31, 2024

    @elguachiiii i'll try that

    tedbiv
    Author
    Oct 31, 2024· 1 reaction

    @elguachiiii yay! thank you, that worked... too many moving parts for my old brain.