Original content based on the model definition by @Defozo. This model will hopefully allow users to experience the fp16 model content on lower-spec hardware.
This is a merge of the flux1-dev fp16 base model and the NSFW MASTER FLUX LoRA: https://civarchive.com/models/667086?modelVersionId=746602
Version 1.2 has the PussyDiffusion - Flux LoRA merged in to improve vagina detail:
https://civarchive.com/models/983498/pussydiffusion-flux
The version 1.2 hyper 8-step models contain the version 1.2 content plus the ByteDance Hyper-FLUX Acceleration LoRA merged in, allowing images to be generated in 8 (sometimes 9 or 10) steps:
https://civarchive.com/models/691446?modelVersionId=774008
NOTE: When the ByteDance Hyper-FLUX Acceleration LoRA is merged into a model and the model is converted to fp8 precision, the conversion process corrupts the hyper 8-step acceleration blocks from the LoRA. This causes the model to behave like a standard (non-hyper) fp8 model. The good news is that the LoRA can still be included in the image prompt, and you still get the benefit of hyper 8-step with the fp8 model.
If you like this content, try a version of my other model series:
https://civarchive.com/models/967270?modelVersionId=1171422
To convert and merge, I used scripts from:
https://github.com/kohya-ss/sd-scripts/
To convert to gguf format, I used scripts from:
https://github.com/Zuntan03/EasyForge/
I used --ratios 0.8 in the merge command for version 1.0 and --ratios 0.65 for version 1.1.
Requires the ae VAE and the clip_l and (t5xxl_fp8_e4m3fn or t5xxl_fp16) text encoders.
Creates fairly good NSFW content. No trigger keywords needed. Tested with base Flux image generation settings.
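For context, what a LoRA merge does per weight is conceptually simple: add the LoRA delta, scaled by the chosen ratio, to the base weight (W_merged = W_base + ratio · (alpha/rank) · up @ down). A minimal pure-Python sketch with tiny made-up matrices — these are not real Flux weights, and `merge_lora` is an illustrative helper, not the actual script API:

```python
# Sketch of the math behind a LoRA merge, on tiny hypothetical matrices:
#   W_merged = W_base + ratio * (alpha / rank) * (up @ down)

def matmul(a, b):
    """Plain-Python matrix multiply for small lists of lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def merge_lora(w_base, up, down, ratio, alpha, rank):
    """Add the scaled low-rank delta (up @ down) into the base weight."""
    delta = matmul(up, down)
    scale = ratio * (alpha / rank)
    return [[w_base[i][j] + scale * delta[i][j]
             for j in range(len(w_base[0]))] for i in range(len(w_base))]

# 2x2 base weight, rank-1 LoRA, --ratios 0.65 as used for version 1.1
w = [[1.0, 0.0], [0.0, 1.0]]
up = [[1.0], [2.0]]      # 2x1
down = [[0.5, 0.5]]      # 1x2
merged = merge_lora(w, up, down, ratio=0.65, alpha=1.0, rank=1)
print(merged)
```

The real scripts do this per layer over thousands of tensors; the ratio is what --ratios controls.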
NOTE: If you're not interested in clothing, use the version 1.0 model; it does a much better job with female anatomy. Version 1.1 sacrifices quality to work better with clothed women.
Description
Same Flux-to-LoRA ratio (0.65) as version 1.1, saved as fp16 then converted to NF4, at the request of user @AICuriousity22.
Comments (30)
Be warned: v1.2 is in NF4 format instead of a standard UNET.
Loading it as a UNET, you will receive an error like I did:
UNETLoader
Error(s) in loading state_dict for Flux: size mismatch for img_in.weight: copying a param with shape torch.Size([98304, 1]) from checkpoint, the shape in current model is torch.Size([3072, 64]).
......
Are you running on Forge?
@tedbiv ComfyUI. No problem with other checkpoint
are you loading it the same way? this is not a unet-only model. also, have you tried t5xxl_fp16 and fp8 to see if it behaves differently? i don't have comfyui installed. maybe it's time for me to take the plunge...
@tedbiv Yes, the only change made is selecting this checkpoint. No problem with v1.1. I also downloaded it twice, so it's not due to a corrupted file. A Google search gives me this page with a similar error: https://github.com/comfyanonymous/ComfyUI/issues/4828
It seems to me that this file is saved in NF4 format hence the error.
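(Aside: the shapes in that error are consistent with NF4 packing. bitsandbytes-style NF4 stores two 4-bit values per byte, so a (3072, 64) fp16 tensor flattens to 3072 × 64 / 2 = 98304 packed bytes, i.e. a (98304, 1) blob — exactly the mismatch the loader reports. A minimal sketch of that shape check; `looks_nf4_packed` is a hypothetical helper, not a real loader API:)

```python
import math

# Heuristic check for an NF4-packed checkpoint, based only on tensor
# shapes (e.g. as read from a safetensors header). NF4 packs two 4-bit
# values per byte, so an fp16 tensor of shape (3072, 64) becomes a flat
# (3072 * 64 // 2, 1) = (98304, 1) uint8 blob.

def looks_nf4_packed(shape, expected_shape):
    """True if `shape` matches `expected_shape` packed at 2 values/byte."""
    n_expected = math.prod(expected_shape)
    return shape == (n_expected // 2, 1)

# img_in.weight in flux1-dev is (3072, 64) in fp16
print(looks_nf4_packed((98304, 1), (3072, 64)))   # the shape from the error
print(looks_nf4_packed((3072, 64), (3072, 64)))   # a normal fp16 tensor
```

A standard UNET loader expects the unpacked shapes, which is why it rejects the NF4 file while an NF4-aware loader accepts it.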
@tedbiv Same error after I disabled all LoRAs. In ComfyUI it clearly indicates that it's stuck at the UNET loader with the red frame. Once it's stuck, it won't go any further in the workflow.
@shadowcliffs just reread your last msg. so this is a problem with comfyui and nf4? my image is the checkpoint version, so it should work... is that how you read the last response? the unet-only version is about half the size, but the vram signature is similar.
fyi there is no content difference between v1.2 and v1.1, other than nf4ing...
@tedbiv Got it. Better state somewhere clearly that this is NF4 instead of UNET.
@shadowcliffs isn't that in the 'about this version'? also, i was worried about mixing nf4, gguf and regular checkpoints under the same model entry. glad it works... :) sorry for the inconvenience.
@tedbiv You are right, but we are spoiled. Check out this link and you will see how other checkpoints are labeled.
https://civitai.com/models/630820/flux-fusion-v2-4-steps-gguf-nf4-fp8fp16?modelVersionId=936309
@shadowcliffs thanks for the link. i didn't realize we could use non-numericals in version scheme.
updated description... :)
thanks
yeah baah, 1.1 was fine, 1.2 yeah NOPE :(
And yeah, I have read it is NF4, so delete.
@LoneWoolfMan it is v1.1 content. others just asked for nf4. maybe uses less vram?
@LoneWoolfMan personally i prefer v1.0. it's much better for nudes. but that's just me.
@tedbiv Since you added NF4 to 1.2, you might want to add UNET to the label for 1.1.
@shadowcliffs v1.0 and v1.1 should be checkpoints not unets? maybe i don't understand...
@tedbiv They are checkpoints in the format of UNET, which is loaded with UNET loader.
@tedbiv Sorry, not sure if you're asking whether I have less VRAM or telling me that 1.2 uses less :D With a 4090 (waiting for the 5090 now), I can only get more VRAM if I put in another GPU.
But the problem was, in ComfyUI when I loaded the checkpoint, I got a wall of errors when I pressed Queue. And I could not really figure out why, other than it must be NF4.
@LoneWoolfMan no, i was just responding to your statement about nf4. i find the regular v1.0 or the v1.4 gguf work pretty well.
SEEKING YOUR INPUT
should i create/post an fp16 nf4 unet-only version of this content? filesize is ~6GB, vram signature similar to previous versions. being fp16 may give slightly better image quality than fp8. ymmv.
pls
fp16 nf4: no point, nf4 is the worst of all. if you're planning fp16, go for the full model or Q8 or Q5KS or something. all NF4 models i tried are way worse than even the Q4 versions; fp8 is even way better than the NF4 version.
@elguachiiii fp16 q8_0 gguf of version 1.1 is uploading right now.
Your link to the script is broken:
https://github.com/kohya-ss/sd-scripts/
It seems it's not terminated properly and includes empty space and the word 'USING' from the line below. Also, do you have this model multiple times on the site, or did someone rip someone else off?
https://civitai.com/models/796670/nsfw-master-flux-fp8-lora-merged-with-flux1-dev-fp16-saved-as-fp8
https://civitai.com/models/701671/nsfw-master-flux-lora-merged-with-flux1-dev-fp16
the first link is my creation converting to fp8, the second link is the location of the original fp16 content. does that answer the question?
@tedbiv Totally, no worries. I guess I've seen people add related models to a single page more often, is all :)
Great model btw!
@az420 np, i fixed the other thing. thx for catching it. probably bad naming convention...