ULtraReal (12GB) ft. Simv4 CLIP-G
Full checkpoint: do not load an additional CLIP or VAE.
This model uses Simulacrum CLIP at a custom weight
This model is slightly less realistic than the SuperModel Edition but is more flexible with Anime and Manga.
This model is excellent for anime-to-realistic image-to-image.
ULtraReal 2 (SuperModel Edition)
This model uses FP32 timestep training.
Realistic faces and characters, refined across hundreds of PONY trainings.
Full FP32 precision (UNET can be downcast with no issues)
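To illustrate what "downcasting the UNET" means in practice, here is a minimal sketch using placeholder tensors (the real checkpoint keys, filenames, and loading code will differ): FP32 weight tensors are converted to FP16 while non-float buffers are left untouched.

```python
import torch

# Toy stand-in for a checkpoint state dict (real UNet keys differ).
state = {
    "unet.weight": torch.randn(4, 4, dtype=torch.float32),
    "step_counter": torch.tensor(0, dtype=torch.int64),
}

# Downcast only FP32 tensors to FP16; leave integer buffers untouched.
half = {k: (v.half() if v.dtype == torch.float32 else v) for k, v in state.items()}

print(half["unet.weight"].dtype)   # torch.float16
print(half["step_counter"].dtype)  # torch.int64
```

The same pattern applies when re-saving a full checkpoint at half precision to save disk space and VRAM.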
ULtraReal 8GB PONY
This model was trained using FP32 precision with a focus on realism
The hybrid version is 8GB but will run as fast as base PONY
FP32 CLIP is as fast as FP16 and superior in 99% of cases (CLIP is handled on the CPU).
This model is intended for use with upscaling (see the images for a workflow).
Because this model uses FP32 CLIP, the following launch arguments should be used; they will not slow down your it/s unless you have very low system RAM:
ComfyUI: --fp32-text-enc
Forge/Auto1111: --clip-in-fp32
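As a sketch of where those arguments go (paths and filenames below are typical defaults and may differ on your install):

```shell
# ComfyUI: pass the flag when launching
python main.py --fp32-text-enc

# Forge / Automatic1111: add the flag to COMMANDLINE_ARGS
# in webui-user.bat (Windows) or webui-user.sh (Linux), e.g.:
# set COMMANDLINE_ARGS=--clip-in-fp32
```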
Version 1.0 is outdated and should not be used in most cases.
All images should be repeatable when loading the source image.
Note: I normally try to credit image remixes. If you see "your" prompt, comment below and I will link your image.
Comments (16)
Hot damn. Great model for realistic images.
Thanks
@Felldude No problem. Also: thanks for the buzz you sent me for the benchmark images.
Awesome! Can we get a version for Illustrious too? Goddess of Realism was just posted for Illustrious, and it's looking great, but this one looks to be smoother and less like film, which IMO is great!
IMHO this is the best Pony checkpoint. I just don't manage to train any LoRAs with it. I tried it with your V2SupermodelED version in OneTrainer. I set the weight to FP32, the train data type to FP32, Clip Grad Norm to 2, and used AdamW (regular version) as the optimizer. The results look overcooked and blurry before it learns anything useful (despite using the lowest learning rate). Any tips on what I need to change in the settings to get a usable result?
Unscheduled Adam might do better, or try MSE instead of MAE.
Since it's a Pony model, does this now require more than 12GB of VRAM instead of the ordinary 8GB? Or does it not load it all into memory?
Asking because I only have 12GB of VRAM :P
It can partially load using the dynamic block loading PyTorch came up with around the time of FLUX. Also, if you only use the FP32 CLIP argument, it is only 8GB or so.
@Felldude It worked fine with my a1111 setup without having to change anything.
Though, the 4GB model you have from a while ago follows the prompt better, and idk what magic sauce you gave the 4GB model, but it's also capable of understanding size differences like the height of characters. This 12GB model likes to fill the entire picture with whatever is in the prompt, and just what's in the prompt. Perhaps that's a skill issue on my part for writing short and lazy prompts, but I like the fact that the 4GB model gets me. I still write prompts similar to the ones I used in SD1.5 xD
@punkbuzter340 Interesting, it could be the changes I made to the CLIP that cause the solo-character focus.
So what is the difference between this model and the new one in early access? https://civitai.com/models/1274490/ultrareal-2k
Does the new one have vae and clip included as well?
The 2K model was trained toward a higher native base resolution. The EA model uses the base PONY CLIP at FP32 and a standard FP32 SDXL VAE; it is similar to the SuperModel ED with higher-resolution training.