Serenity: a photorealistic base model
Welcome to my corner!
I'm creating Dreambooths, LyCORIS, and LoRAs. If you want to know how I make them, here is the guide: https://civarchive.com/articles/7/dreambooth-lycoris-lora-guide
I have a Buy Me A Coffee page if you want to support me ( https://www.buymeacoffee.com/malcolmrey ). You can also leave a request there if you want me to do something specific.
HuggingFace model card: https://huggingface.co/malcolmrey/serenity
Here is how I made it:
I used the merge script from this repository: https://github.com/Faildes/Chattiori-Model-Merger
These were the commands:
python merge.py "WS" "C:/Development/StableDiffusion/stable-diffusion-webui/models/Stable-diffusion" "photon_v1.safetensors" "realisticVisionV30_v30VAE.ckpt" --alpha 0.45 --save_safetensor --save_half --output "D:/Development/StableDiffusion/SDModels/Models/merging/merged-model-v0"
python merge.py "WS" "C:/Development/StableDiffusion/stable-diffusion-webui/models/Stable-diffusion" "juggernaut_final.safetensors" "wyvernmix_v9.safetensors" --alpha 0.45 --save_safetensor --save_half --output "D:/Development/StableDiffusion/SDModels/Models/merging/merged-model-v1"
python merge.py "WS" "C:/Development/StableDiffusion/stable-diffusion-webui/models/Stable-diffusion" "epicrealism_pureEvolutionV3.safetensors" "cyberrealistic_v31.safetensors" --alpha 0.45 --save_safetensor --save_half --output "D:/Development/StableDiffusion/SDModels/Models/merging/merged-model-v2"
python merge.py "WS" "C:/Development/StableDiffusion/stable-diffusion-webui/models/Stable-diffusion" "analogMadness_v50.safetensors" "absolutereality_v10.safetensors" --alpha 0.45 --save_safetensor --save_half --output "D:/Development/StableDiffusion/SDModels/Models/merging/merged-model-v3"
python merge.py "WS" "C:/Development/StableDiffusion/stable-diffusion-webui/models/Stable-diffusion" "icbinpICantBelieveIts_final.safetensors" "madvision_v40.safetensors" --alpha 0.45 --save_safetensor --save_half --output "D:/Development/StableDiffusion/SDModels/Models/merging/merged-model-v4"
python merge.py "WS" "C:/Development/StableDiffusion/stable-diffusion-webui/models/Stable-diffusion" "perfection_v30Pruned.safetensors" "DriveE/civitai3/huns_v10.safetensors" --alpha 0.45 --save_safetensor --save_half --output "D:/Development/StableDiffusion/SDModels/Models/merging/merged-model-v5"
python merge.py "WS" "C:/Development/StableDiffusion/stable-diffusion-webui/models/Stable-diffusion" "DriveE/civitai3/metagodRealRealism_v10.safetensors" "DriveE/civitai3/unrealityV20_v20.safetensors" --alpha 0.45 --save_safetensor --save_half --output "D:/Development/StableDiffusion/SDModels/Models/merging/merged-model-v6"
python merge.py "WS" "C:/Development/StableDiffusion/stable-diffusion-webui/models/Stable-diffusion" "DriveE/civitai3/succubusmix_v21.safetensors" "reliberate_v10.safetensors" --alpha 0.45 --save_safetensor --save_half --output "D:/Development/StableDiffusion/SDModels/Models/merging/merged-model-v7"
python merge.py "WS" "C:/Development/StableDiffusion/stable-diffusion-webui/models/Stable-diffusion" "DriveE/civitai/pornvision_final.safetensors" "DriveE/civitai/edgeOfRealism_eorV20Fp16BakedVAE.safetensors" --alpha 0.45 --save_safetensor --save_half --output "D:/Development/StableDiffusion/SDModels/Models/merging/merged-model-v8"
python merge.py "ST" "D:/Development/StableDiffusion/SDModels/Models/merging" "merged-model-v0.safetensors" "merged-model-v1.safetensors" --model_2 "merged-model-v2.safetensors" --alpha 0.33 --beta 0.33 --save_safetensor --save_half --output "D:/Development/StableDiffusion/SDModels/Models/merging/merged-st-model-v0"
python merge.py "ST" "D:/Development/StableDiffusion/SDModels/Models/merging" "merged-model-v3.safetensors" "merged-model-v4.safetensors" --model_2 "merged-model-v5.safetensors" --alpha 0.33 --beta 0.33 --save_safetensor --save_half --output "D:/Development/StableDiffusion/SDModels/Models/merging/merged-st-model-v1"
python merge.py "ST" "D:/Development/StableDiffusion/SDModels/Models/merging" "merged-model-v6.safetensors" "merged-model-v7.safetensors" --model_2 "merged-model-v8.safetensors" --alpha 0.33 --beta 0.33 --save_safetensor --save_half --output "D:/Development/StableDiffusion/SDModels/Models/merging/merged-st-model-v2"
python merge.py "ST" "D:/Development/StableDiffusion/SDModels/Models/merging" "merged-st-model-v0.safetensors" "merged-st-model-v1.safetensors" --model_2 "merged-st-model-v2.safetensors" --alpha 0.33 --beta 0.33 --save_safetensor --save_half --output "D:/Development/StableDiffusion/SDModels/Models/merging/merged-st-model-final"
As you can see, I first merged 18 models using Weighted Sum, so there were 9 pairs.
Afterwards, I merged the results in triples using Sum Twice, which produced 3 models; those were merged once more using Sum Twice to get the final product.
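The two merge modes boil down to simple per-tensor arithmetic. Here is a minimal sketch with hypothetical helper functions (not the Chattiori script itself; the Sum Twice formula is my reading of how alpha and beta are applied, so treat it as an assumption):

```python
# Weighted Sum ("WS"): out = (1 - alpha) * A + alpha * B, applied per weight key.
def weighted_sum(a, b, alpha):
    return {k: (1 - alpha) * a[k] + alpha * b[k] for k in a.keys() & b.keys()}

# Sum Twice ("ST"): merge A with B first, then fold in C with a second weight.
def sum_twice(a, b, c, alpha, beta):
    return weighted_sum(weighted_sum(a, b, alpha), c, beta)
```

In the real script the dict values are torch tensors loaded from the checkpoint files, and --save_half casts the merged tensors to fp16 before saving.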
I selected 18 models that are proven to work well with my LyCORIS and LoRA models. I applied the same testing that I apply to other models, and I was happy with the results (I'm not going to rank my own model since I'm biased, but I was happy with the good-to-bad ratio).
Where this model shines, IMHO, is the eyes and mouth.
My long-term goal was (is?) to make a decent base model and then further fine-tune it on high-quality photographs of people. But with SDXL on the horizon, I'm not sure when/if I will have time to do it (fingers crossed?), so at least I'm doing the first part: the base model as a checkpoint merge.
Description
This is Serenity v2 converted into the diffusers format (needed for training with diffusers/ShivamShrirao/inb4devops/etc.). Some people also use the diffusers version because it allegedly requires less VRAM.
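For reference, a single-file checkpoint can be converted into the diffusers folder layout with the diffusers library itself. A hedged sketch (the filename is a placeholder, and loading is wrapped in a function so nothing is downloaded just by running the snippet):

```python
# Hypothetical sketch: convert a single-file SD 1.5 checkpoint into the
# diffusers multi-folder layout that Dreambooth-style trainers expect.
def convert_to_diffusers(ckpt_path: str, out_dir: str) -> None:
    # Imported lazily so the sketch can be read without diffusers installed.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_single_file(
        ckpt_path, torch_dtype=torch.float16
    )
    pipe.save_pretrained(out_dir)  # writes unet/, vae/, text_encoder/, ...

# Example (hypothetical filename):
# convert_to_diffusers("serenity_v21.safetensors", "serenity-diffusers/")
```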
Comments (17)
So, Serenity has been updated to 2.1
I fixed the issue that prevented conversion to diffusers (the diffusers version is now also available for those who wish to train using my method) and that also prevented people from using it as a base for training in kohya_ss and other tools.
As a positive side effect, the size of the safetensors file dropped from 3 GB to the standard 2 GB :)
Sidenote, I have also switched from Serenity v1 to Serenity v2 as a base for dreambooth/lycoris.
You should be able to replace your 2.0 with this one; all the generations I tested looked exactly the same.
Any chance of an inpaint version?
I mostly use SD for retouching, and the built-in checkpoint merger crashes every time I try to merge.
Does 2.1 still crash? (2.0 definitely had this issue.)
I'll add making an inpainting version to my list, sure!
I know I promised it 3 months ago, but I should be able to do it soon (I was quite busy/away in recent months).
@malcolmrey no worries. life has a way of happening, in complete disregard of any plans or promises made.
Amazing model, one of my go-to models now!
Thank you! I'm glad you like it :)
If this isn't actually trained, and it looks like it's entirely a merge, you should really change the type to merged rather than trained.
What do you mean, "if this isn't actually trained"?
By "trained", people mean "fine-tuned". Only the vanilla 1.4 / 1.5 models were trained from scratch; all others were fine-tuned.
I'm not sure which part of the description you read, but if you click on 2.0 and then on the version info (not what you see on the main page), you will see a link to the article that describes the process of creating the v2 version: https://civitai.com/articles/3198
In short, it was additionally trained 3 times. I used 75,000 handpicked images for the additional training/fine-tuning over 300,000 steps.
@malcolmrey Thanks for your great efforts ❤️
@devilkad you are welcome! :-)
thanks for appreciating my work! :)
@malcolmrey Weird, I had completely forgotten about this (as I apparently never read my messages), and upon seeing the description, you're correct. Maybe I was just too stoned that night and misread/skimmed, my apologies!
An excellent, very high-quality photorealistic model. Flexibility and versatility are all there: it responds perfectly to prompts, with good anatomy and faces. Good job!
Thank you very much!
Everything you mentioned is exactly what I had in mind while making this model :)
Incredible work, @malcolmrey! I have been following you for a while. Question: what are your thoughts on SDXL vs SD 1.5? I'm curious whether you've experimented with those models, and whether you have found the SD 1.5 base to be better for your realistic generations.
Hey hey! Thank you for the compliment! :)
I have indeed experimented, at least with training LoRAs. I have trained some and will train more, but I'm in the middle of changing my setup and it takes a while. Also, I feel that my SDXL stuff is not as good as my 1.5 work, so I don't really want to flood the site with it; I'm still experimenting with some options.
At this point in time, both 1.5 and SDXL have a really great ecosystem of base models and complementary LoRAs. I would say that some of the custom SDXL LoRAs are much better than their 1.5 counterparts (I remember being in awe of how well the glass/crystal statues worked in SDXL, but there were many others).
So both have merits, and you can even combine them (do a low-res pass with one and then a high-res pass with the other, or perhaps render a full scene with SDXL and run ADetailer with 1.5).
If you run locally, there is also the question of your hardware. SDXL requires a better machine, so some people can't really utilize its full potential (me personally: I can't train SDXL Dreambooth on my main machine; I would need RunPod or something).
To answer your question: I have definitely experimented more with 1.5 models. As you know, for my realistic generations I tend to use Serenity 2, which is my own model, but there are other very good models as well. To bring the quality up (skin texture, some style or composition), I often add some LoRAs.
For the realism of faces, I mix multiple models: the default two (a LyCORIS plus a LoRA of the person) are fine enough, but for some people I use even more. I would say that both 1.5 and SDXL are capable of generating very realistic images, and both have their own unique look & feel (even the custom fine-tunes); a keen expert eye can often guess correctly which base was used.
I'm using 1.5 predominantly because the community here is still big, more people can generate stuff, and I feel like my process produces better results in 1.5 (that is my personal take; someone could be making better stuff in SDXL for various reasons [experience, a better process for SDXL, etc.]).
I hope this answers your question :)
Interesting model, but since I have mixed results (sometimes good, other times not at all, especially for faces and human figures in general), I would like to know what the "recommended settings" are for good results. I haven't found any information here or on the Hugging Face page.
PS: By "recommended settings," I also mean the sampler to use, the range of the CFG scale and sampling steps, and whether or not to use clip skip. And maybe some recommendations on prompting, on how to get good results. I hope I'm not asking too much!