Eris, the Greek goddess of strife and discord.
This is my first pure 3D render/cartoon model. When I started working on this model I had almost zero knowledge of writing prompts for 3D renders/CGI. As such, my main goal for this model was quickly realized: make a model that accepts photorealistic prompts and returns beautiful 3D-rendered outputs. Normal 3D render/CGI prompts also work with this model.
Important Settings
The following settings assume you're using the AUTOMATIC1111 webui.
USE CFG VALUES BELOW 8*
This model does not respond well to CFG values above 8, with some caveats that are explained below.
Do not go over 6 CFG with Euler a unless you are using Hires. fix. I found that you can go as high as 10 CFG with Hires. fix enabled. This applies to all samplers.
DPM++ SDE Karras starts to degrade around 7 CFG if using clip skip = 1 with no Hires. fix. However, if you're using DPM++ SDE Karras with clip skip = 2, the CFG value can go up to at least 10 without significant (if any) degradation.
General Advice:
Do Not exceed 6 CFG with Euler a, unless using hires fix.
Do Not exceed 7 CFG with DPM++ SDE Karras with clip skip set to 1, unless using hires fix.
You Can go up to at least 10 CFG with DPM++ SDE Karras with clip skip set to 2, without using hires fix.
I could not test all samplers. If you notice broken compositions or distortions, lower your CFG value and/or enable Hires. fix. My own personal settings for Hires. fix can be found below.
Thanks to Duo V2 on the Unstable Diffusion discord for helping me get to the bottom of this issue.
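For reference, the CFG recommendations above can be sketched as a tiny helper. This is illustrative only: the function name is made up, the numbers simply encode the advice in this guide, and any sampler I didn't test falls back to the general stay-below-8 rule.

```python
def max_recommended_cfg(sampler: str, clip_skip: int = 2, hires_fix: bool = False) -> int:
    """Highest CFG value this guide suggests for a given setup (illustrative)."""
    if hires_fix:
        return 10  # Hires. fix allows up to ~10 CFG with any sampler
    if sampler == "Euler a":
        return 6   # do not exceed 6 CFG without Hires. fix
    if sampler == "DPM++ SDE Karras":
        return 10 if clip_skip == 2 else 7  # clip skip 2 tolerates higher CFG
    return 8       # general rule: stay below 8 CFG

print(max_recommended_cfg("Euler a"))                        # 6
print(max_recommended_cfg("DPM++ SDE Karras", clip_skip=1))  # 7
print(max_recommended_cfg("Heun", hires_fix=True))           # 10
```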
VAE: vae-ft-mse-840000-ema-pruned.safetensors
This is already baked into the model but it never hurts to have VAE installed.
Under Settings -> Stable Diffusion -> SD VAE, select the VAE you installed via the dropdown. Then, uncheck Ignore selected VAE for stable diffusion checkpoints that have their own .vae.pt next to them.
Clip skip : 2
This varies. Most of the time Clip skip = 2 works best with this model. If you're not getting the results you want, first try Clip skip = 1.
ETA Noise Seed Delta: 31337
Found in -
Settings -> Sampler parameters -> Eta noise seed delta
Sigma Noise: 1
Sigma Churn: 0 or 1
Optional - When using the Euler, Heun, or DPM2 samplers, try setting Sigma Churn = 1. Found in -
Settings -> Sampler parameters -> sigma churn
Optional Negative Textual Inversion (TI): bad_prompt_version2
link - https://huggingface.co/datasets/Nerfgun3/bad_prompt/blob/main/bad_prompt_version2.pt
Install to - stable-diffusion-webui -> embeddings
In webui - Add (bad_prompt_version2:0.8) at the end of your negative prompt.
Optional Recommended Samplers:
DPM++ SDE Karras // 18-35 steps
Euler // 40 - 80 steps // Sigma Churn : 1
DPM++ 2M Karras // 40 - 70 steps
DPM++ 2S a Karras // 30 - 75 steps
Heun // 20 - 30 steps // Sigma Churn : 1
DPM2 // 30 - 75 steps // Sigma Churn : 1
These are just recommendations; feel free to experiment with different values and samplers.
Optional Hires. Fix settings (these are my go-to parameters):
Upscaler : R-ESRGAN 4x+
Denoising strength: 0.3 - 0.35
The rest of the settings are up to you. They don't really impact the quality of the output.
Note: Hires Steps - if you're using a high step count for SD, you can choose a Hires Steps value lower than your SD step value. This will reduce generation times.
For example, let's say you're using the Euler sampler @ 60 steps. You can then set Hires Steps to ~40, as opposed to the default of 0. Hires Steps = 0 means it will follow the same step value as the sampler (Euler in this case). Lower Hires Steps values will reduce generation time.
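The behavior described above can be sketched as a one-liner (a hypothetical helper; webui handles this internally, this just restates the rule):

```python
def effective_hires_steps(sampler_steps: int, hires_steps: int = 0) -> int:
    """0 means 'reuse the sampler's step count'; anything else overrides it."""
    return sampler_steps if hires_steps == 0 else hires_steps

print(effective_hires_steps(60, 0))   # 60 (follows the sampler's steps)
print(effective_hires_steps(60, 40))  # 40 (fewer steps, faster hires pass)
```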
Optional Face Restoration:
I use CODEFORMER @ 0.8 strength. Whether or not you use face restoration is up to you.
Optional Quick Settings:
For easier access to some of the settings in webui, you can move some settings sliders/dropdowns to the main interface (beside model selection).
Quick settings location -
Settings -> User interface -> Quicksettings list
My Quick settings -
sd_model_checkpoint, CLIP_stop_at_last_layers, s_churn
Where:
CLIP_stop_at_last_layers is Clip skip
s_churn is Sigma Churn
PDF documentation is in the works.
Check out my other models
SDXL
Boomer Art Model - https://civarchive.com/models/163139/boomer-art-model-bam
SD1.5
Doomer Boomer - https://civarchive.com/models/118247?modelVersionId=128239
Lomostyle - https://civarchive.com/models/109923/lomostyle
Based Model - https://civarchive.com/models/83991?modelVersionId=89262
Electric Eden - https://civarchive.com/models/64355/electric-eden
Cine Diffusion - https://civarchive.com/models/50000/cine-diffusion
Project AIO - https://civarchive.com/models/18428/project-aio
WonderMix - https://civarchive.com/models/15666/wondermix
Experience - https://civarchive.com/models/5952/experience
Elegance - https://civarchive.com/models/5564/elegance
VisionGen - Realism Reborn - https://civarchive.com/models/4834/visiongen-realism
LoRA
Pant Pull Down - https://civarchive.com/models/11126/pant-pull-down-lora
Questions or Feedback?
Description
Initial Release
What's included? [Original Name | What Civitai will rename them]
Eris_v1.safetensors | eris_v1.safetensors
Eris_v1.ckpt | eris_v1c.ckpt
Eris_v1-fp16.safetensors | eris_v1(1).safetensors
This model is mainly a trained model. Initial model used for training is made up of the following merge:
Rev_Animated - https://civitai.com/models/7371/rev-animated
sxdplus - https://huggingface.co/NerdAINerd/sxdplus/tree/main
ZombiMixV9 - https://huggingface.co/zombihed/ZombiMix-v6/tree/main
Unvail AI 3DKX v2 - https://civitai.com/models/2504/handas-3dkx-11
Training was resumed with Noise Offset enabled.
FAQ
Comments (16)
Nice detailed write up going on there. GJ. Looking forward to digging into this later on.
Great model as all the previous. Can you also add the 2gb safer safetensor format ?
The 3.95GB version in the dropdown (fp16 version) was as low as I could get the safetensor version without pruning. I'm not entirely sure why the ckpt version halved to 2GB and the safetensor didn't. I'll look into it 👍
Thanks for bringing this to my attention.
Interesting.. can this do males?
You might need attention/emphasis; (male:1.2), etc. But it should be capable of generating male images.
This generates corrupted images. I assume that it will work well after changing the specified settings. But I will not make them, because I do not want to spoil the work of the other checkpoints.
Discovered the issue. It has to do with CFG values. Updated the model's description to reflect these discoveries. No settings should require changing - simply lower the CFG value or choose a different sampler.
Hi ndimensional, was reading the description for this, about the goal being for it to generate beautiful CGI / Render style images. If you had to ask me to pick a model that does that, I would go to one of my favorite models ever.... Experience... made by you. :-) If someone were to ask you to differentiate or break down how you'd compare Experience to Eris, what would you say?
Really gotta make a second paragraph to emphasize.... dude I love Experience (normal & Realistic!), it's so good!
Experience is pure Americana.
Eris is a cute & creative Japanese girl that moved to America and plays western RPGS.
I'm not sure if you wanted a more technical comparison (which I can do if needed), but I find it hard to quantify the subtle nuances with technicality.
A few technical notes:
1.) Eris was trained with Noise Offset, Experience was not. Meaning overall, Eris is going to have greater perceived fidelity.
2.) Experience was trained on batch-processed HDR tone-mapped images, Eris was not. This is what gives Experience its HDR-esque shine. The caveat being, Eris somewhat makes up for this with the previously mentioned Noise Offset.
3.) Eris is a bit more complex, Experience is not. You can read through the description of this model to see all the oddities it brings lol.
4.) Experience (as of v7.5) has some autoencoder issues and doesn't always respond well to user prompts. Eris doesn't seem to have this issue.
5.) Eris generates better hands, simple as.
Now, that might sound like Eris is objectively better than Experience, but that leads me back to not being able to quantify the subtle nuances with technicality. It really comes down to what kind of images you're trying to generate and which model responds best to your prompts. General rule of thumb (not solid advice at all): if you're going for CGI/3D renders, Eris is probably the better model, as that's what it was trained for. If you try it and think Experience did better, there's nothing wrong with switching back to Experience.
Thanks for the kind words. Since you like Experience... I plan on updating Experience in the near future. It's at the top of my radar. Just want to make sure the upgrade is an actual improvement, because truth be told - Experience is one of my favorite models as well.
Hope this helped!
@ndimensional Thanks so much for the reply, yes this is exactly what I was looking for in terms of an explanation. I appreciate the short form and the more technical details as well. Thanks so much, keep up the amazing work! And I certainly appreciate your attention to detail with the logic behind updating Experience and your mindset with doing so.
@ndimensional I also just read all the details on the model here. Man, you really know your shit, I appreciate the technical details and can tell you've put the hours (and hours and hours) in. I appreciate you and your effort. Thanks so much. I'm hoping I'll be able to get such a sound grasp on this like that eventually.
@Balthazar99 No problem! Glad to help.
@Balthazar99 Thanks, I'm a deep learning engineer by trade so I have the benefit of working with the underlying methods/code of these models long before Stable Diffusion was created. If you're interested in the technical side of stable diffusion, there are some great resources online that go over the basics of Latent Diffusion Models (LDMs), Text Encoders (CLIP), and UNet blocks (a type of Convolutional Neural Network (CNN) used in stable diffusion). The basic principles behind all of this really boil down to mathematical theorems/algorithms (see Markov chains for an example of what "Steps" refers to).
If you're looking to better understand Stable Diffusion but are less interested in the underlying principles, the above isn't really needed. In both cases, the best way to learn is to experiment and garner experience; eventually, with enough time, things start to come naturally and the skills you acquire can be transferred over into other fields or programs. Good luck, and keep at it!
Great model !
Can we have a colorful ver ?
Noise Offset makes this way too dark on average. Please leave Noise Offset out of checkpoints, we can use Lora and if in the future a better way of dealing with overall bright/dark is introduced, it will make baking in noise offset a bad idea.
My fav model so far ! Do u have a plan to update this model .