Original content based on the model definition by @Defozo. This model should allow users to experience the fp16 model's content on lower-spec hardware.
This is a merge of the flux1-dev fp16 base model and the NSFW MASTER FLUX LoRA: https://civarchive.com/models/667086?modelVersionId=746602
Version 1.2 additionally merges in the PussyDiffusion - Flux LoRA to improve vagina detail:
https://civarchive.com/models/983498/pussydiffusion-flux
The version 1.2 hyper 8-step models contain everything in version 1.2, plus the ByteDance Hyper-FLUX Acceleration LoRA merged in, which allows images to be generated in 8 (sometimes 9 or 10) steps:
https://civarchive.com/models/691446?modelVersionId=774008
NOTE: When the ByteDance Hyper-FLUX Acceleration LoRA is merged into a model and that model is then converted to fp8 precision, the conversion corrupts the hyper 8-step acceleration blocks from the LoRA, causing the model to behave like a standard (non-hyper) fp8 model. The good news is that the LoRA can still be applied separately at generation time, so you still get the benefit of hyper 8-step with the fp8 model.
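The workaround above (keeping the hyper LoRA out of the baked fp8 model and applying it at generation time instead) could be sketched with diffusers as below. The file path and repo id are placeholders, not the author's exact setup, and diffusers with PEFT support is assumed:

```python
# Hedged sketch: apply the Hyper-FLUX LoRA at run time instead of merging it
# into an fp8 checkpoint (merging + fp8 conversion corrupts the hyper blocks).
# Paths/repo ids are placeholders.
from pathlib import Path

hyper_lora = Path("loras/Hyper-FLUX.1-dev-8steps-lora.safetensors")  # placeholder
steps = 8  # hyper 8-step schedule

if hyper_lora.exists():
    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
    )
    pipe.load_lora_weights(str(hyper_lora))  # applied at run time, not baked in
    image = pipe("a test prompt", num_inference_steps=steps).images[0]
```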
If you like this content, try a version of my other model series:
https://civarchive.com/models/967270?modelVersionId=1171422
To convert and merge, I used scripts from:
https://github.com/kohya-ss/sd-scripts/
To convert to GGUF format, I used scripts from:
https://github.com/Zuntan03/EasyForge/
I used --ratios 0.8 in the merge command for version 1.0 and --ratios 0.65 for version 1.1.
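For background, --ratios controls how strongly the LoRA's low-rank delta is folded into the base weights. A toy sketch of the underlying math (this is not the kohya script itself, just an illustration of what a weighted merge does):

```python
# Toy illustration of a LoRA merge with a ratio: each affected base weight W
# becomes W + ratio * (B @ A) * scale, where A (down) and B (up) are the
# LoRA's low-rank factors. Real merges iterate over many such matrices.
import numpy as np

def merge_lora(W, A, B, ratio, alpha=None):
    """Fold a low-rank LoRA update into a base weight matrix."""
    rank = A.shape[0]
    scale = (alpha / rank) if alpha is not None else 1.0
    return W + ratio * (B @ A) * scale

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
A = rng.standard_normal((2, 8))   # down-projection (rank 2)
B = rng.standard_normal((8, 2))   # up-projection

merged = merge_lora(W, A, B, ratio=0.8)   # analogous to --ratios 0.8 (v1.0)
assert merged.shape == W.shape
# ratio 0.0 leaves the base weights untouched
assert np.allclose(merge_lora(W, A, B, ratio=0.0), W)
```

A lower ratio (0.65 in v1.1) blends in less of the LoRA, trading effect strength for base-model behavior.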
Requires the ae VAE plus the clip_l and (t5xxl_fp8_e4m3fn or t5xxl_fp16) text encoders.
Produces fairly good NSFW content. No trigger keywords needed. Tested with base Flux image generation settings.
NOTE: If you're not interested in clothing, use the version 1.0 model; it does a much better job with female anatomy. Version 1.1 sacrifices some of that quality to work better with clothed women.
Description
Same LoRA content as version 1.2, with the additional merge of the ByteDance Hyper-FLUX Acceleration LoRA to allow image generation in 8 steps. Converted to Q8_0 GGUF format.
FAQ
Comments (13)
where do i put the gguf model?
i put mine in .../models/StableDiffusion/ in a directory i created /flux
Because I have small genitals, I am in a constant state of utter confusion.
How do I refer to these checkpoints in a Python script running on my CPU-ONLY system?
FYI I am allergic to Nvidia cards. It's a known medical condition... don't judge.
are you able to run existing flux models in your setup? i imagine cpu only would take hours to generate a flux image...
@tedbiv 1024x512 @ fp32 == 20mins per 2 images, 35mins for 4 images
I own a rack server found in a dumpster with 192GB of RAM.
It's why I wrote this: https://civitai.com/articles/8029/flux1-schnell-flux1-dev-on-an-elderly-computer
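For the CPU-only Python question above, a minimal sketch using diffusers' GGUF support. The checkpoint path is a placeholder, and this assumes a recent diffusers with the gguf package installed, plenty of RAM, and a lot of patience (CPU generation takes tens of minutes per image, as noted above):

```python
# Hedged sketch: load a GGUF Flux checkpoint with diffusers on a CPU-only box.
# File paths are placeholders; diffusers >= 0.32 with `gguf` is assumed.
from pathlib import Path

ckpt = Path("models/StableDiffusion/flux/flux-nsfw-hyper-Q8_0.gguf")  # placeholder
device = "cpu"  # no CUDA required

if ckpt.exists():
    import torch
    from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig

    transformer = FluxTransformer2DModel.from_single_file(
        str(ckpt),
        quantization_config=GGUFQuantizationConfig(compute_dtype=torch.float32),
        torch_dtype=torch.float32,
    )
    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev",  # supplies the VAE and text encoders
        transformer=transformer,
        torch_dtype=torch.float32,
    ).to(device)
    image = pipe("a test prompt", num_inference_steps=8, height=512, width=1024).images[0]
    image.save("out.png")
```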
You're gonna be judged here, sorry mate. nVidia is the way to go for properly working with AI.
Will a FLUX NSFW checkpoint someday reach the same level as Pony SDXL?
i am waiting for a flux based pony... i don't know. the pony models are much more refined than base models.
any chance for a non-hyper version of the Q8 gguf? I think that's the most used quant, but you unfortunately don't have it available!
i thought i had a copy of it saved, but i can't find it. if you're interested i can recreate it...
let me test it, i'll post it tomorrow.
it's posted, enjoy.
heads up, the image uploaded was not correct. i am uploading the correct image today. sorry for the confusion.