Physiogen is my attempt at fine-tuning SDXL for NSFW use.
Remember....
Preview images are intentionally unedited so as to show the model's actual output.
Ensure you are using an SDXL LoRA. SD 1.5 LoRAs will not work.
Heavily weight your tokens if you aren't seeing desired effects. For example: (breasts:1.8), (large breasts:2.0). You will especially need this if you are trying to apply a style.
I'm finding that SDXL takes prompts quite a bit more literally. For example, if you are trying to generate a half-body headshot, specifying pubic hair or ass/hips may cause the wrong style of photo to be generated.
You may use this in model merges; I just ask for credit.
Please read the release notes over there. 👉👉👉
Or up there 👆👆👆if you're on a mobile device.
If you would like to chat, please find me on the ✨ Civitai Discord ✨
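The token-weighting tip above can be sketched in code. Below is a minimal, simplified Python parser for the `(token:weight)` syntax; it is an illustration only, not A1111's actual implementation (which handles nesting, escapes, and `[...]` de-emphasis):

```python
import re

# Matches "(some text:1.8)" style weighted spans. Simplified: no
# nesting, no escaped parentheses, unlike the real A1111 parser.
WEIGHT_PATTERN = re.compile(r"\(([^():]+):([0-9.]+)\)")

def parse_weighted_prompt(prompt):
    """Split a prompt into (text, weight) chunks; unweighted text gets 1.0."""
    chunks = []
    pos = 0
    for match in WEIGHT_PATTERN.finditer(prompt):
        before = prompt[pos:match.start()].strip(", ")
        if before:
            chunks.append((before, 1.0))
        chunks.append((match.group(1), float(match.group(2))))
        pos = match.end()
    tail = prompt[pos:].strip(", ")
    if tail:
        chunks.append((tail, 1.0))
    return chunks

print(parse_weighted_prompt("photo of a woman, (large breasts:2.0), beach"))
# → [('photo of a woman', 1.0), ('large breasts', 2.0), ('beach', 1.0)]
```

The weights scale the corresponding token embeddings' influence during conditioning, which is why pushing a weight to 1.8-2.0 can force a style or feature that the model otherwise ignores.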
Description
This is my first release of the Physiogen model.
Release notes
Trained on 114 images of nude women
No hardcore NSFW support
Tagged with unedited booru tags (1girl, solo, breasts, nipples, etc.)
You may see stretched-out bodies at image heights above 1280px.
Please think of this as an alpha or test version. Let me know in the comments what needs work!
Example captions from training images
1girl, solo, long hair, breasts, looking at viewer, black hair, navel, jewelry, medium breasts, nipples, standing, full body, nude, small breasts, barefoot, pussy, necklace, nail polish, completely nude, piercing, black nails, realistic, navel piercing
1girl, solo, ass, nude, lying, censored, pussy, spread legs, on back, blurry, pubic hair, uncensored, anus, blurry background, female pubic hair, close-up, clitoris, clitoral hood
1girl, solo, long hair, breasts, looking at viewer, smile, black hair, navel, brown eyes, medium breasts, nipples, nude, small breasts, pussy, spread legs, mole, lips, pubic hair, uncensored, anus, female pubic hair, spread pussy, mole on breast, realistic, nose, clitoris, mole on thigh
1girl, solo, long hair, breasts, looking at viewer, blonde hair, large breasts, brown eyes, medium breasts, nipples, upper body, nude, parted lips, blurry, lips, blurry background, tan, tanlines, realistic
FAQ
Comments (22)
Great to see it working now. You had the base model set to SD 1.5, though; fixed now :)
Ahh!! I'm on it!
Should be fixed - good looking out @eurotaku!
Hmmm, it doesn't work with some LoRAs. I tried a Dana Scully LoRA for SDXL, which works very well with other models. With Physiogen it generates images full of artifacts, blurred and looking as if only 5 steps of generation had run. With the same parameters and a different model, everything is OK.
I'll look into this! Thank you!
@hanskloss Which Dana Scully model are you using? malcolmrey's or ainow's? Are you able to provide an example prompt that you used?
Can you reach out to me on the Civitai Discord? I was using ainow's LoRA and it looks okay-ish. I wouldn't say it has a ton of artifacts or blurring. I'd like to see some of your outputs, as well as your example prompts. Thank you so much!
Will you add a pruned version? My PC crashes out of memory every time I try to load this model in AUTOMATIC1111 =(
I have 16 GB RAM and 12 GB VRAM; is that just not enough, or am I doing something wrong?
Are you able to load other SDXL models? They are all about the same size... 12 GB VRAM should be fine.
@BilboTaggins this is my first time trying to load an SDXL model; AUTOMATIC1111 is up to date. While loading, RAM usage maxes out; if I try a second time, the PC just shuts down and a blue screen appears (tried many times; after restarting the PC, the second try in a row crashes):
Loading weights [24d9780ca7] from P:\AUTOMATIC1111\stable-diffusion-webui\models\Stable-diffusion\physiogenXL_v01.safetensors
Creating model from config: P:\AUTOMATIC1111\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Failed to create model quickly; will retry using slow method.
changing setting sd_model_checkpoint to physiogenXL_v01.safetensors [24d9780ca7]: RuntimeError
Traceback (most recent call last):
File "P:\AUTOMATIC1111\stable-diffusion-webui\modules\shared.py", line 633, in set
self.data_labels[key].onchange()
File "P:\AUTOMATIC1111\stable-diffusion-webui\modules\call_queue.py", line 14, in f
res = func(*args, **kwargs)
File "P:\AUTOMATIC1111\stable-diffusion-webui\webui.py", line 238, in <lambda>
shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: modules.sd_models.reload_model_weights()), call=False)
File "P:\AUTOMATIC1111\stable-diffusion-webui\modules\sd_models.py", line 578, in reload_model_weights
load_model(checkpoint_info, already_loaded_state_dict=state_dict)
File "P:\AUTOMATIC1111\stable-diffusion-webui\modules\sd_models.py", line 504, in load_model
sd_model = instantiate_from_config(sd_config.model)
File "P:\AUTOMATIC1111\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "P:\AUTOMATIC1111\stable-diffusion-webui\repositories\generative-models\sgm\models\diffusion.py", line 61, in __init__
self.conditioner = instantiate_from_config(
File "P:\AUTOMATIC1111\stable-diffusion-webui\repositories\generative-models\sgm\util.py", line 175, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "P:\AUTOMATIC1111\stable-diffusion-webui\repositories\generative-models\sgm\modules\encoders\modules.py", line 88, in __init__
embedder = instantiate_from_config(embconfig)
File "P:\AUTOMATIC1111\stable-diffusion-webui\repositories\generative-models\sgm\util.py", line 175, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "P:\AUTOMATIC1111\stable-diffusion-webui\repositories\generative-models\sgm\modules\encoders\modules.py", line 428, in __init__
model, _, _ = open_clip.create_model_and_transforms(
File "P:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\open_clip\factory.py", line 308, in create_model_and_transforms
model = create_model(
File "P:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\open_clip\factory.py", line 192, in create_model
model = CLIP(**model_cfg, cast_dtype=cast_dtype)
File "P:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\open_clip\model.py", line 203, in __init__
text = _build_text_tower(embed_dim, text_cfg, quick_gelu, cast_dtype)
File "P:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\open_clip\model.py", line 170, in _build_text_tower
text = TextTransformer(
File "P:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\open_clip\transformer.py", line 541, in __init__
self.token_embedding = nn.Embedding(vocab_size, width)
File "P:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\sparse.py", line 142, in __init__
self.weight = Parameter(torch.empty((num_embeddings, embedding_dim), **factory_kwargs),
RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 252968960 bytes.
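It is worth noting that the failed allocation in this traceback is system RAM, not VRAM, and its size lines up exactly with one fp32 token-embedding table for the OpenCLIP ViT-bigG text tower that SDXL uses as its second text encoder. A quick sanity check:

```python
# The 252,968,960 bytes the traceback fails to allocate is exactly the
# fp32 token-embedding table (nn.Embedding(vocab_size, width)) of the
# OpenCLIP ViT-bigG text tower being constructed at that point.
vocab_size = 49408        # standard CLIP/OpenCLIP BPE vocabulary size
embedding_width = 1280    # ViT-bigG text-tower width
bytes_per_float32 = 4

print(vocab_size * embedding_width * bytes_per_float32)  # → 252968960
```

So the crash is a plain CPU out-of-memory while instantiating the model on the CPU before it is moved to the GPU, which is consistent with the replies below about 16 GB RAM being tight for SDXL in A1111.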
@joausyq12 Try ComfyUI instead. It has much lower memory requirements and is more flexible (although it has a learning curve). I think SD.Next also handles SDXL better at the moment if you prefer the A1111 approach to the GUI.
@joausyq12 Do you have CUDA installed with xformers turned on? I switched over to Comfy, but my A1111 startup arguments are: --no-half-vae --medvram --opt-split-attention --xformers
I also could not load the base SDXL model in A1111 until I installed CUDA and xformers.
p.s. I had to install Microsoft Visual Studio Code as well, as it was a prerequisite for CUDA.
p.p.s. You probably don't need --medvram; I am running with a 6 GB VRAM card :P
ComfyUI: come here, my son......
@wyxzddsjj919 I honestly need to switch to Comfy. I think once I got a workflow set up that was similar to AUTOMATIC1111's, I could handle it.
This runs on my RTX 2060 6GB VRAM with 16GB RAM. It takes 2 minutes per 1080x1080 image and I can't use anything else on my PC, but it works, with ComfyUI. I didn't attempt AUTOMATIC1111.
You currently need at least 32 GB RAM if you want to use SDXL with A1111, though I still run out of memory constantly and I can't have any other programs open. 48-64 GB RAM would be better.
However, ComfyUI uses less RAM and also less VRAM.
Hello, thank you for the great base model. Could you share more information on your training, please? For example: what software are you using to train the model, and with what settings? What image resolution was it trained on? Do you use tags for the training images? What graphics card do you use? Do you use regularization images? I think it will be very helpful for the community.
This was trained on runpod.io with an A100 GPU. Images were tagged with danbooru style tags (see the release notes on version 0.1). Image resolution varied, but the images used were large, high resolution photos. Trained with buckets enabled on kohya_ss scripts. I only use kohya's repo for training. I did not use any regularization images on this training.
Your model drastically reduces tan lines, but the results are less sharp, and prompts with "standing in the rain" or "standing in a shower" produce really bad water and water drops. FYI.
Thanks for the feedback - I'll look at improving these tags/scenes!
I've really been enjoying this model with SDXL. It seems like it has a great understanding of photography styles and paired with the NSFW aspects it's just smashing. It also has been working great with my own LoRAs. How hard would it be to add other poses like all fours, ass up or spreading legs?
I think it can be done. I'm working on a 0.2 version, which has more high resolution images, and will have some of the poses you're talking about. It will also have some more refined portraits/faces/bodies so it should have a bit more diversity when generating. Thanks for the feedback!