works well with natural language prompts with some tags at the end. use the updated v1.5 vae (or whatever vae you want, i'm not your mom).
trigger words aren't needed imo. i also recommend a low cfg value like 3.5 or 5. 512x768, 512x512, or 768x768 likely works best; then use hi-res fix or img2img to reach a higher resolution. the model was not created with nudity in mind, but it can be unintentionally horny sometimes, so prompt accordingly (simpler prompts get hit harder by this, so be descriptive about what you want).
images generated with DEIS using onnxdiffusersui with max attention slicing, so you probably won't be able to recreate them with auto1111 due to how it handles seeds. quality should be comparable though.
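the settings above can be sketched with diffusers. this is a hedged sketch, not the author's exact setup: `./photomerge` is a placeholder path, the prompt is illustrative, and `DEISMultistepScheduler` needs diffusers >= 0.12.0.

```python
# recommended settings from the notes above
CFG_SCALE = 3.5           # low guidance; 3.5-5 recommended
WIDTH, HEIGHT = 512, 768  # base resolution; upscale via hi-res fix or img2img

def make_pipeline(model_path):
    """build a txt2img pipeline that swaps in the DEIS scheduler.

    imports are done lazily so this file can be read without
    diffusers installed; the call itself needs diffusers >= 0.12.0.
    """
    from diffusers import StableDiffusionPipeline, DEISMultistepScheduler
    pipe = StableDiffusionPipeline.from_pretrained(model_path)
    pipe.scheduler = DEISMultistepScheduler.from_config(pipe.scheduler.config)
    return pipe

# usage (loads the full model, so not run here):
# pipe = make_pipeline("./photomerge")
# image = pipe("analog photo of a lighthouse at dusk",
#              guidance_scale=CFG_SCALE,
#              width=WIDTH, height=HEIGHT).images[0]
```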
Description
not tested much yet, but results seem good.
Comments
Please indicate in the tags that this is NSFW, porn or erotica.
i've marked the images that are nsfw as nsfw. this model isn't specifically for nsfw content, and doesn't use nsfw-oriented models.
Hey, love this mix! Would you share the reddit post for that merge method?
https://github.com/xzuyn/OnnxDiffusersUI/blob/main/triMerge.py
i modified the code from here to take safetensors.
https://www.reddit.com/r/StableDiffusion/comments/1012lto/proposal_of_model_merging_technique_that_should/j2sf2kt/
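a minimal sketch of the idea behind a three-way merge. this is a plain per-weight average over keys shared by all three models; the actual triMerge.py (and the technique proposed in the reddit thread) may blend the weights differently, so treat this only as an illustration.

```python
def tri_merge(a, b, c):
    """merge three state dicts by averaging each shared weight.

    a, b, c: dicts mapping parameter names to lists of floats
    (simple stand-ins for tensors). only keys present in all
    three models are merged.
    """
    shared = a.keys() & b.keys() & c.keys()
    merged = {}
    for key in shared:
        merged[key] = [
            (x + y + z) / 3.0
            for x, y, z in zip(a[key], b[key], c[key])
        ]
    return merged

# toy example with one "layer" of three weights each
m1 = {"layer.weight": [0.0, 3.0, 6.0]}
m2 = {"layer.weight": [3.0, 3.0, 3.0]}
m3 = {"layer.weight": [6.0, 3.0, 0.0]}
print(tri_merge(m1, m2, m3))  # {'layer.weight': [3.0, 3.0, 3.0]}
```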
Tried to look for DEIS sampler but couldn't find anything about it, can you share some info on that? would love to give it a try.
it was added in diffusers version 0.12.0. it's similar to the dpm-solver schedulers, but better imo.
https://huggingface.co/docs/diffusers/api/schedulers/deis
@xzuyn I see, sorry to bother you again with this; i'm quite new and it's hard to find info. do you know if it can be used in automatic1111? after updating the webui and extensions it's still not there. Thanks for the info mate
@mctwost i can't find anything about automatic1111 adding it, so unless you know how to edit the code to accept "DEISMultistepScheduler" as a scheduler, you won't be able to use it yet.
I don't use a1111 for anything other than merging checkpoints atm, so if you want you can make a feature request for "DEISMultistepScheduler" here https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/new/choose
Have an unpruned version for merging our own trainings?
v1.1 and v1.2 are already unpruned. the 4gb versions are unpruned and should contain ema weights, while the 2gb versions are pruned fp16.
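one way to check whether a checkpoint kept its ema copy is to look at its parameter names. the sketch below is a heuristic based on the common ldm-style layout, where ema weights live under a `model_ema.` prefix; the exact prefix in any given file is an assumption, not guaranteed.

```python
def looks_like_ema_checkpoint(keys):
    """heuristic: ldm-style checkpoints store ema weights under keys
    prefixed with 'model_ema.'; their presence suggests an unpruned
    checkpoint that kept the ema copy. (the prefix is an assumption
    based on the common ldm layout and may not hold for every file.)
    """
    return any(k.startswith("model_ema.") for k in keys)

# toy key lists standing in for real state-dict keys
unpruned_keys = ["model.diffusion_model.out.weight", "model_ema.decay"]
pruned_keys = ["model.diffusion_model.out.weight"]
print(looks_like_ema_checkpoint(unpruned_keys))  # True
print(looks_like_ema_checkpoint(pruned_keys))    # False
```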
@xzuyn Weird, and interesting! I restarted my computer and A1111 and merged again, and now they seem to be working. Okay. Whew!
@eta weird. i can't do training so i haven't tested it. the 4gb model is the same model that came out of the merges; maybe the trimerge method messes with it. that would be sad, because i feel like this would be a decent base for training on.
@xzuyn I edited my reply, something was being weird on my end it seems, whew!
@eta oh that's great. glad i can use this to train on once training becomes possible for amd users.
let me know if there are any other high quality trained models, and i will test out a merge with them.
how do I use the 1.5 vae?
if you use automatic1111, rename the vae to the same as the model and put it in the same folder as the model.
if you use onnxdiffusersui, you need to convert it with the model using --vae_path with https://github.com/Amblyopius/Stable-Diffusion-ONNX-FP16/blob/main/conv_sd_to_onnx.py
i'm unsure how to do it with other software.
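the automatic1111 rename convention above can be sketched like this. paths and file names are illustrative (the `demo/` scratch dir and dummy files stand in for a real install and real downloads); the key point is that the vae file gets the model's name plus a `.vae.pt` suffix, next to the model.

```shell
# scratch layout standing in for an a1111 install
mkdir -p demo/models/Stable-diffusion

# stand-ins for the downloaded model and vae files
touch demo/photomerge_v12.safetensors demo/vae-ft-mse-840000.ckpt

# put the model in the models folder, and copy the vae next to it
# renamed to <model name>.vae.pt so a1111 pairs them automatically
cp demo/photomerge_v12.safetensors demo/models/Stable-diffusion/
cp demo/vae-ft-mse-840000.ckpt \
   demo/models/Stable-diffusion/photomerge_v12.vae.pt

ls demo/models/Stable-diffusion
```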
Hi,
very interesting. could you please share the exact workflow? where do i put the two images that will be merged in auto1111?
I didn't use A1111, and I didn't merge any images. I used a script I found on reddit to merge 3 models at a time; https://github.com/xzuyn/OnnxDiffusersUI/blob/main/triMerge.py
That's the code from here, modified to take safetensors; https://www.reddit.com/r/StableDiffusion/comments/1012lto/proposal_of_model_merging_technique_that_should/j2sf2kt/
Merge recipe:
1. wavymulder_AnalogModelPortrait_v1-0 = (AnalogDiffusion_v1-0 + ModelShoot_v1-0 + PortraitPlus_v1-0 @ triMerge)
2. wavymulder_MegaMix_v1-0 = (wavymulder_AnalogModelPortrait_v1-0 + TimelessDiffusion_v1-0 + LomoDiffusion_v1-0 @ triMerge)
3. PhotoMerge_v1-2 = (wavymulder_MegaMix_v1-0 + Dreamlike-Photoreal_v2-0 + RealisticVision_v1-3 @ triMerge)
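the three-step recipe above can be sketched as a chain of three-way merges. the toy models and the per-weight averaging here are stand-ins for the real checkpoints and for whatever blend triMerge.py actually performs; only the chaining structure mirrors the recipe.

```python
def tri_merge(a, b, c):
    # stand-in three-way blend: per-weight mean over shared keys
    return {k: [(x + y + z) / 3.0 for x, y, z in zip(a[k], b[k], c[k])]
            for k in a.keys() & b.keys() & c.keys()}

# toy one-weight models standing in for the real checkpoints
analog, modelshoot, portraitplus = {"w": [0.0]}, {"w": [1.0]}, {"w": [2.0]}
timeless, lomo = {"w": [3.0]}, {"w": [4.0]}
dreamlike, realistic_vision = {"w": [5.0]}, {"w": [6.0]}

# step 1: wavymulder_AnalogModelPortrait_v1-0
amp = tri_merge(analog, modelshoot, portraitplus)
# step 2: wavymulder_MegaMix_v1-0
megamix = tri_merge(amp, timeless, lomo)
# step 3: PhotoMerge_v1-2
photomerge = tri_merge(megamix, dreamlike, realistic_vision)
print(photomerge)
```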
@xzuyn thanks! i want to merge two images, so this won't work for that? because in the images here, i see elements of one image in the other. or is that a coincidence?
@gagan602 the inpainting examples?
Details
Files
photomerge_v12-inpainting.yaml