My models are just a mix of everything you can find yourself on this wonderful site. I'm not chasing fame and I don't need your money; I just sometimes spend a few hours of my free time experimenting with my favorite models. This model's page exists on this site only because my friends once asked me to share their mix, and because it's just damn convenient. Please don't accuse me of stealing someone's property; considering what we're all doing here, that would be a little ironic. In my mixes I use checkpoints such as Luma, RevAnim, DDosmix, Perfect World, GalenaRedux, DarkSushi, CosmicBabes and so on. I don't even remember all the model names. I don't do anything you couldn't do yourself.
Try, experiment, enjoy!
Here is the workflow for my pictures:
Sampler: DPM++ 2M Karras (it runs faster on my weaker hardware while still giving good results)
Base image size: 512x768 (I don't have enough VRAM for more)
CFG Scale: 7. Steps: 30.
Default negative line: EasyNegative, drawn by bad-artist, sketch by bad-artist-anime, (bad_prompt:0.8), (artist name, signature, watermark:1.4), (ugly:1.2), (worst quality, poor details:1.4), bad-hands-5, badhandv4, blurry, child, loli, kids
Then I generate 5 pictures to identify any obvious problems, after that I make another 10-20 pictures, choose the best one from them and proceed to the upscaling stage.
Then I turn on the Hires. fix option:
Upscale by: 2 (I don't have enough VRAM for more)
Upscaler: 4xUltraSharp
Hires steps: 15 (half of the steps from the first stage)
Denoising strength: from 0.3 to 0.6 (the higher the value, the more detailed the picture, but the greater the risk of artifacts and distortions in the places the neural network finds doubtful or ambiguous)
Then I turn on ADetailer: the 1st model is face_yolov8.pt, the 2nd is mediapipe_face_mesh_eyes_only. If needed, I fill in their positive and negative prompts, but more often that simply isn't necessary.
PROFIT!
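The whole first-stage + Hires. fix recipe above can be sketched as a request payload for the AUTOMATIC1111 web UI's `/sdapi/v1/txt2img` endpoint (available when the UI is launched with `--api`). This is a minimal sketch under assumptions: the field names follow the A1111 API, but the exact sampler string ("DPM++ 2M Karras" vs. a separate "Karras" scheduler field) varies between UI versions, the prompt is a placeholder, and the negative prompt is abridged from the list above.

```python
import json

# Negative prompt abridged from the default line in the description above.
NEGATIVE = (
    "EasyNegative, drawn by bad-artist, sketch by bad-artist-anime, "
    "(bad_prompt:0.8), (artist name, signature, watermark:1.4), (ugly:1.2), "
    "(worst quality, poor details:1.4), bad-hands-5, badhandv4, blurry"
)

def build_payload(prompt: str, denoise: float = 0.45) -> dict:
    """Assemble generation settings matching the workflow described above."""
    return {
        "prompt": prompt,
        "negative_prompt": NEGATIVE,
        "sampler_name": "DPM++ 2M Karras",  # older A1111 naming; newer builds split sampler/scheduler
        "width": 512,
        "height": 768,
        "cfg_scale": 7,
        "steps": 30,
        # Hires. fix stage
        "enable_hr": True,
        "hr_scale": 2,
        "hr_upscaler": "4xUltraSharp",
        "hr_second_pass_steps": 15,    # half of the first-stage steps
        "denoising_strength": denoise, # 0.3-0.6: higher = more detail, more artifact risk
    }

payload = build_payload("a portrait of an elven ranger, forest background")
print(json.dumps(payload, indent=2))
# To actually generate, POST this to a running instance, e.g.:
# requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
```

Doing the 5-image sanity batch and then the 10-20 picks is just calling this endpoint in a loop with different seeds; the payload itself stays the same.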
Comments (25)
Hey! Any chance for pruned models for v4.1 ?
What does pruned mean?
@Showbiz_CH Basically a smaller file size: the extra weights used only during training (like the EMA copies) are stripped out, with no noticeable quality loss for image generation.
Mate, at least half (though I think much more) of your model's base is Luma. I mean, some versions are twin-similar. WTF? Why didn't you mention it in the description?
Hey.
Dear sadxzero, some versions of DarkSun do use your beautiful model as one of several. By my calculations it's around 56-57% in version 4.1, for example. I have never hidden that I just take a few models I like and mix them with one another until the result seems interesting to me; actually, that's what the description says. For me this is just entertainment to pass a boring evening after work, nothing more. I do not advertise this model anywhere (version 2.0 is an exception, I won't lie, I did link to it on reddit), I do not try to monetize it, and I do not chase any goals or ratings. Often I don't even write down which models I mix. If it makes things easier for you, I will gladly leave a link to your checkpoint here.
@Iris_DS Okay, sorry for the aggressive approach. I didn't want to start any kind of holy war, I just didn't understand your moves at first glance. I guess we're cool now; thank you for understanding, it's really good to have people like you in this community. You may freely use any content I've made, just mention it if it's visually too obvious. And thanks for the kind words too, it's kinda flattering. Good luck with your experiments :)
@sadxzero What an amazing interaction. Well done to the both of you
This model 'recipe' is really great, according to my tests! AI is getting powerful enough that over time this will only get better! It would be so great if you could at least list some of the changes between versions, e.g.: 4.1 (added more training for taller women, now generates anime/2D better), 5.0 (more contrast, more detailed skin, etc.). Thank you!
Since in most cases this is just a random mix of different models, I can't accurately pin down the final differences between versions. I just play with the settings, weights, and models, and if I like the result, I upload it here so my friends can conveniently download it. Literally: I did not train this model on any specific datasets, I just took a few ready-made ones and mixed them.
Thank you, so it's up to us to find out.
Is there any chance to train it further? I need a model that's more androgynous and more cyberpunk in style to create a supercreature :-)
Does the model need any VAE?
No, it doesn't need one.
4.1 or 5.0 ?
idk which one is more recent.
4.1 is recent
@fadedninna true, What is the difference then since I assume 5.0 is the most up to date model?
@SnG17 every version is a little bit different than each other
@SnG17 5.0 is more anime like and 4.1 is more american comic style
Hey man, great model! Got a quick question about your workflow steps: what are yolov8 and mediapipe_face_mesh_eyes_only? Cheers
They're detection models from the ADetailer extension.
here you go
https://github.com/Bing-su/adetailer
A little more info for you, Shnoom: ADetailer is an extension for the AUTOMATIC1111 web UI for Stable Diffusion. You can install it by going to the Extensions tab in AUTOMATIC1111, then the Install from URL sub-tab; paste in the link from devilkad (https://github.com/Bing-su/adetailer) and click Install. After installing, you'll want to restart AUTOMATIC1111 (you can restart just the GUI, but I find it easier to close it and start fresh).

Once installed, ADetailer appears below the Hires. fix and Seed settings in txt2img and img2img and does some post-processing on your image. It's automatic and applies to all your generations while it's enabled. There are 3 tabs within ADetailer where you choose what it should look for and inpaint; the most common are the yolov8 face models and the mediapipe face and eye models, and the filenames speak for themselves. There are models for hands too, but all I've ever gotten out of hand inpainting was better-looking but still deformed hands! LOL. For faces it works great, and for eyes it works pretty well. I find that fixing just the face tends to fix the eyes too, so there usually isn't any need to do both. A lot of the time I let it do both anyway, because I'm letting it run overnight generating images, so why not let it take its time and make the eyes a little better?

Anyway, I hope that helps. ADetailer tends to fix faces better than the Restore Faces option built into AUTOMATIC1111. It's also fair to mention that you generally don't need to touch any of the other ADetailer settings just to fix faces: select one of the face models from the drop-down and leave the rest at its defaults. I've only had to change the defaults once or twice, when I was running very high steps and very high Hires. fix denoising; I had to turn down the denoising in ADetailer or I would end up with a tiny whole person inpainted over my characters' faces!
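For anyone scripting against the API instead of clicking through the UI, ADetailer can also be enabled per request through A1111's `alwayson_scripts` mechanism. A minimal sketch, with the caveat that the exact argument shape comes from the ADetailer README and has changed between extension versions, so check the docs for your installed version; the model names here just echo the workflow described above.

```python
# Assumption: ADetailer accepts a list of per-pass settings dicts under
# "args", one dict per detection model (this matches its README, but the
# format is version-dependent).
def adetailer_settings(models: list[str]) -> dict:
    """Build an alwayson_scripts fragment that runs one ADetailer pass per model."""
    return {
        "ADetailer": {
            "args": [{"ad_model": m} for m in models],
        }
    }

payload = {
    "prompt": "portrait of a knight",
    "steps": 30,
    # Face + eyes, as in the workflow above; face alone often fixes eyes too.
    "alwayson_scripts": adetailer_settings(
        ["face_yolov8.pt", "mediapipe_face_mesh_eyes_only"]
    ),
}
```

This merges into the same txt2img payload you'd send anyway, so the overnight-batch habit described above works unchanged: ADetailer just runs after each generation.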
Great model! I use it quite a lot; it's very stable and gives great results.
As much as this seems to be the premium RPG/DnD model, I found my prompted subject walking around in a house of mirrors, and I couldn't get the image I would get with another model. There's no consistency. Awesome backgrounds, no doubt, and beautiful generations, but unreliable nonetheless.
Please keep improving your model, because it's the best RPG/DnD model on CivitAI.
Well, I've started using it more and more over the last couple of weeks. This model is exactly what I was looking for: it gives me the style I need and it is pretty consistent. Amazing, 10/10.