All images in the previews of all my models are made without embeddings, LoRAs, ADetailer, or ControlNet.
Just a prompt and hires. fix at 0.3 denoise.
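For context on what the 0.3 denoise does: a hires. fix second pass is an img2img-style pass, so it only re-noises the upscaled image part-way and then runs roughly that fraction of the sampling schedule. A minimal sketch of the idea (the helper name and the exact rounding are mine, not any UI's actual code):

```python
def hires_fix_steps(total_steps: int, denoise: float) -> int:
    """Approximate number of sampling steps the hi-res pass actually runs.

    The pass adds noise up to `denoise` strength and denoises from there,
    so only about that fraction of the schedule executes.
    """
    return max(1, round(total_steps * denoise))

print(hires_fix_steps(30, 0.3))  # -> 9: low denoise = few steps, stays close to the first pass
```

This is why a low denoise like 0.3 mostly sharpens detail instead of re-composing the image.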
2 test versions have been uploaded alongside the original release.
These lean more realistic, but have not yet been thoroughly tested.
This model was put together to make anime-style images.
I use: CLIP skip 1
No baked-in VAE; I myself use:
https://huggingface.co/stabilityai/sd-vae-ft-mse-original/tree/main
Feel free to use your own preferred VAE!
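If you're on an AUTOMATIC1111-style webui, using an external VAE is just a matter of putting the file where the UI looks for VAEs and selecting it in Settings. A sketch, assuming the default install layout (the filename is as published in the linked repo):

```shell
# Download the ft-MSE VAE from the repo linked above
wget https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors

# Move it into the webui's VAE folder (assumed default path), then pick it
# under Settings -> VAE in the UI
mv vae-ft-mse-840000-ema-pruned.safetensors stable-diffusion-webui/models/VAE/
```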
Description
Added Icomix to the merge.
Comments (4)
After comparison I think CLIP skip 2 is the best; CLIP skip 1 shows more flaws.
I've never specified CLIP values. What do you mean by this, and how/when do you do it? Purely curious, in order to get the best results from models and model merging. tyty
@bleuropa329 CLIP skip is a setting that makes Stable Diffusion skip one of the final layers of the text encoder in the model.
There is a base anime model, which I will not name, that was trained with CLIP skip 2. Since it was the base for almost all anime models, many models benefit from using it.
With regards to this model though, I myself have found that CLIP skip 1 (the default in most UIs) works really well, especially for an anime model.
But always feel free to try other settings; what I said is there to guide, not to force you to do anything.
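To make concrete what "skipping a layer" means: the text encoder produces one hidden state per layer, and CLIP skip N simply takes the output N layers from the end instead of the very last one. A toy sketch (the layer count and helper name are illustrative, not any UI's actual code):

```python
def select_hidden_state(hidden_states, clip_skip=1):
    """CLIP skip 1 = last layer (default), 2 = penultimate, and so on."""
    return hidden_states[-clip_skip]

# Pretend outputs of a 12-layer CLIP text encoder
layers = [f"hidden_state_{i}" for i in range(1, 13)]
print(select_hidden_state(layers, clip_skip=1))  # hidden_state_12
print(select_hidden_state(layers, clip_skip=2))  # hidden_state_11
```

Earlier layers carry slightly less "final" text features, which is why models trained against the penultimate layer keep preferring it.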
@dddokter Even without NAI, CLIP skipping can be useful. If you find your prompt isn't being respected, you can experiment with CLIP skip 3.
YMMV, but it can sometimes drop a bunch of diffuse noise that drags your image away from your prompt.