!!! UPLOADING/SHARING MY MODELS OUTSIDE CIVITAI IS STRICTLY PROHIBITED* !!!
Check my EXCLUSIVE models on Mage.Space: AniMage PXL • AniReal PXL • Lucid Dream • AniMage SD1.5 • Realistic Portrait
SDXL - Pony: AniVerse PXL • AniMerge PXL • AniToon PXL • AniMics PXL • AniVerse XL
SD1.5: AniVerse • AniThing • AniMerge • AniMesh • AniToon • AniMics
Also in Collaboration with Shakker.ai
This model is free for personal use and free for personal merging (*).
For commercial use, please be sure to contact me (via Ko-fi) or by email: samuele[dot]bonzio[at]gmail[dot]com
⬇Read the info below to get the high quality images (click on show more)⬇
Aniverse - Pony XL - make the impossible possible!
This is a long-shot project; I'd like to implement something new with every update!
The name is a merge of two words: Animation and Universe (and a pun: Any + Universe -> Anyverse -> Aniverse)
-> If you are satisfied with my model, press ❤️ to follow its progress and consider leaving me ⭐⭐⭐⭐⭐ in a model review; it's really important to me!
Thank you in advance 🙇
And remember to publish your creations using this model! I’d really love to see what your imagination can do!
Recommended Settings:
An excessive negative prompt can make your creations worse, so follow my suggestions below!
Before applying a LoRA to produce your favorite character, try without it first. You might be surprised by what this model can do!
The best CONFIGURATION that I have found
VAE: a special VAE is already included, you don't need one - (thanks to nuaion)
Upscaler: 4x-Ultrasharp or 4X NMKD Superscale
Clip skip: 2
Width: 768
Height: 1344
CFG: 5.5
Steps: 30
Favorite sampling method: DPM++ 2M or Euler Max
Scheduler: Karras
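If you drive A1111 through its API (started with --api), the settings above can be sketched as a txt2img request payload. This is a hedged sketch: the field names follow A1111's /sdapi/v1/txt2img endpoint, and the prompt strings are placeholders, not part of the recommendation.

```python
# Sketch: the recommended AniVerse settings as an A1111 /sdapi/v1/txt2img payload.
# Field names follow the AUTOMATIC1111 web UI API; the prompts are placeholders.
payload = {
    "prompt": "score_9, score_8_up, score_7_up, Portrait, ...",  # your prompt here
    "negative_prompt": "score_6, score_5, score_4, ...",
    "width": 768,
    "height": 1344,
    "cfg_scale": 5.5,
    "steps": 30,
    "sampler_name": "DPM++ 2M",
    "scheduler": "Karras",
    # Clip skip is a server-side setting override in the A1111 API:
    "override_settings": {"CLIP_stop_at_last_layers": 2},
}

# Sending it would look like this (requires a running A1111 instance with --api):
# import requests
# r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
```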
EXAMPLE OF GENERAL PROMPT:
POSITIVE PROMPT: score_9, score_8_up, score_7_up, (Type of Shoot), (Subject), the description, (background), 4n1v3rs3, more details
EXAMPLE: score_9, score_8_up, score_7_up, Portrait, yorha no. 2 type b, short white hair, black dress, hairband, clothing cutout, cleavage cutout, puffy sleeves, black hairband, feather-trimmed sleeves, juliet sleeves, mole under mouth, sunset, rubble in the background, 4n1v3rs3, depth of field, dynamic angle, fashion photography, sharp, hyperdetailed:1.15
NEGATIVE PROMPT: score_6, score_5, score_4, worst quality:1.4, low quality:1.4, front light, grayscale, doll, plastic, fake, ugly, hair on face, muscolar woman, low res, blurry, fat, topless, child
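The prompt template above can be sketched as a small helper that assembles the pieces in the recommended order. The function name and structure are my own illustration, not something shipped with the model; only the tag order and the 4n1v3rs3 trigger word come from the template.

```python
def build_prompt(shot, subject, description, background, extras="more details"):
    """Assemble a positive prompt in the recommended AniVerse order:
    quality tags, shot type, subject, description, background,
    the 4n1v3rs3 trigger word, then extra detail tags."""
    quality = "score_9, score_8_up, score_7_up"
    parts = [quality, shot, subject, description, background, "4n1v3rs3", extras]
    # Skip any empty pieces so the prompt has no stray commas.
    return ", ".join(p for p in parts if p)

prompt = build_prompt(
    "Portrait",
    "yorha no. 2 type b",
    "short white hair, black dress",
    "sunset, rubble in the background",
)
print(prompt)
```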
Stable Diffusion XL with only 3GB of VRam (nVidia GPU):
If you have an nVidia GPU but have trouble running XL due to low VRAM, try this version made by me and nuaion.
It is a portable version, so it does not interfere with an A1111 already installed on your PC and can easily run in parallel. Besides that, it is movable from PC to PC without problems.
This version does not include any pre-installed model or ADetailer, which we preferred to leave optional.
Let me know if it works for you and what you think.
My A1111 settings:
I run A1111 on my home PC with these settings:
set COMMANDLINE_ARGS= --xformers --skip-torch-cuda-test --no-half-vae
(if you have low VRAM, try adding --medvram-sdxl or --lowvram, which can help, but they slow down image creation)
If you can't install xFormers (read below), use my Google Colab setting:
set COMMANDLINE_ARGS= --disable-model-loading-ram-optimization --opt-sdp-no-mem-attention --no-half-vae
(if you have low VRAM, try adding --medvram-sdxl or --lowvram, which can help, but they slow down image creation)
My A1111 version: v1.9.3 • python: 3.10.11 • torch: 2.1.2+cu121 • xformers: 0.0.23.post1 • gradio: 3.41.2
If you want to activate the xFormers optimization like on my home PC (how to install xFormers):
In A1111, click on the "Settings" tab
In the left column, click on "Optimization"
Under "Cross attention optimization", select "xformers"
Press "Apply Settings"
Reboot your Stable Diffusion
If you can't install xFormers, use SDP attention, like on my Google Colab:
In A1111, click on the "Settings" tab
In the left column, click on "Optimization"
Under "Cross attention optimization", select "sdp-no-mem - scaled dot product without memory efficient attention"
Press "Apply Settings"
Reboot your Stable Diffusion
To emulate the nVidia GPU, follow these steps:
In A1111, click on the "Settings" tab
In the left column, click on "Show all pages"
Search for "Random number generator source"
Select the option "NV"
Press "Apply Settings"
Reboot your Stable Diffusion
If you use my models, install the ADetailer extension for your A1111.
Navigate to the "Extensions" tab within Stable Diffusion.
Go to the "Install from URL" subsection.
Under "URL for extension's git repository", put this link: https://github.com/Bing-su/adetailer
Click on the "Install" button to install the extension
Reboot your Stable Diffusion
How to install the Euler Max sampler:
In A1111, click on the "Extensions" tab
Click on "Install from URL"
Under "URL for extension's git repository", put this link: https://github.com/licyk/advanced_euler_sampler_extension
Once installed, click on the "Installed" tab
Click on "Apply and quit"
Reboot your Stable Diffusion
Now, at the end of the sampler list, you will find the new sampler.
HiRes.Fix Settings:
I don't use Hires. fix because:
1) it doesn't work on my computer
2) my models don't need it. Use txt2image, ADetailer and the suggested upscaler in the resources tab.
If you still want to use it, these are the settings sent to me by MarkWar (follow him to see his creations ❤️):
Hires upscale: 1.5
Hires steps: 20~30
Hires upscaler: R-ESRGAN 4x + Anime6B,
Denoising strength: 0.4
Adetailer: face_yolov8n
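For API users, MarkWar's Hires. fix settings above would map to request fields roughly like the following. This is a sketch under the assumption that you are using A1111's /sdapi/v1/txt2img endpoint; the field names follow that API, the step count is a midpoint of the 20~30 range, and ADetailer is configured separately through its own extension.

```python
# Sketch: the Hires. fix settings above as A1111 txt2img API fields.
# Field names follow the AUTOMATIC1111 web UI API.
hires_payload = {
    "enable_hr": True,
    "hr_scale": 1.5,
    "hr_second_pass_steps": 25,  # 20~30 recommended; midpoint chosen here
    "hr_upscaler": "R-ESRGAN 4x+ Anime6B",
    "denoising_strength": 0.4,
}
```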
How to install and use ADetailer: Click Here
Here you have a review (in Spanish) of the AniVerse Pony XL model (thanks to Salió Aniverse XL | Stable Diffusion en español)
Do you like my work?
If you want, you can help me buy a new PC for Stable Diffusion!
❤️ You can buy me a coffee (an espresso... I'm Italian) or a beer ❤️ This is the hardware list, if you are curious: Amazon Wishlist
I must thank nuaion and GattaPlayer for their support
You are solely responsible for any legal liability resulting from unethical use of this model
(**) Why did I set such strict rules? Because I'm tired of seeing sites like Pixai (and many others) get rich on the backs of model creators without giving anything in return.
(***) Low Rank Adaptation models (LoRAs) and Checkpoints created by me.
As per Creative ML OpenRAIL-M license section III, derivative content (i.e. LoRAs, checkpoints, mixes and other derivative content) is free to have its license modified for further distribution. In that case, such licensing is provided on each individual model on Civitai.com. For all models produced by me, hosting, reposting, reuploading or otherwise using them on other sites that provide a generation service without my explicit authorization is prohibited.
(****) According to Italian law (I'm Italian):
The law on copyright (law 22 April 1941, n. 633, and subsequent amendments, most recently that provided for by the legislative decree of 16 October 2017 n.148) provides for the protection of "intellectual works of a creative nature", which belong to literature, music, figurative arts, architecture, theater and cinema, whatever their mode or form of expression.
Subsequent changes, linked to the evolution of new information technologies, have extended the scope of protection to photographic works, computer programs, databases and industrial design creations.
Copyright is acquired automatically when a work is defined as an intellectual creation.
Also valid for the US: https://ufficiobrevetti.it/copyright/copyright-usa/
All my Stable Diffusion models in Civitai (as per my approval) are covered by copyright.
Description
My personal page for donations (and where you can buy my models in early preview): Samael1976 Ko-fi page
What's new:
Improved details
Improved backgrounds
Better depth of field
Minimal generation errors (about a 1% margin of error over approximately 2,400 images)
BEST CONFIG:
VAE: a special VAE is already included, you don't need one - (thanks to nuaion)
Clip skip: 2
Width: 768
Height: 1344
CFG: 5.5
Steps: 30
Favorite sampling method: DPM++ 2M or Euler Max
Scheduler: Karras
EXAMPLE OF GENERAL PROMPT:
POSITIVE PROMPT: score_9, score_8_up, score_7_up, (Type of Shoot), (Subject), the description, (background), more details
EXAMPLE: score_9, score_8_up, score_7_up, Portrait, yorha no. 2 type b, short white hair, black dress, hairband, clothing cutout, cleavage cutout, puffy sleeves, black hairband, feather-trimmed sleeves, juliet sleeves, mole under mouth, sunset, rubble in the background, 4n1v3rs3, depth of field, dynamic angle, fashion photography, sharp, hyperdetailed:1.15
NEGATIVE PROMPT: score_6, score_5, score_4, worst quality:1.4, low quality:1.4, front light, grayscale, doll, plastic, fake, ugly, hair on face, muscolar woman, low res, blurry, fat, topless, child
A very important thing:
I finally found out what causes ADetailer errors: parentheses ()
So if you want to avoid face or image generation errors, avoid using them (or use them as little as possible) in both the positive and negative prompts
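If a prompt already contains parentheses, a quick way to clean it is a plain string replacement. This is a minimal sketch of my own (nothing model- or extension-specific); note that stripping parentheses also removes A1111's attention-weighting syntax, which is exactly the point of the advice above.

```python
def strip_parentheses(prompt: str) -> str:
    """Remove the ( and ) characters that can trigger ADetailer errors,
    leaving the rest of the prompt text untouched."""
    return prompt.replace("(", "").replace(")", "")

clean = strip_parentheses("(green skin), short hair, sunset")
print(clean)  # green skin, short hair, sunset
```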
Comments (50)
AMAAAAAAAAAAAZING!!!! Can't wait for 4.0 already :p
Thank you sooooo much ❤️
Thank you again for this amazing product. We truly appreciate every effort you put in making these. More power and more success to you!
Thank you sooooooooo much!!!
Best checkpoint on civitai and anywhere else. You have created a masterpiece, brother! :D
Thank you so much bro!🤗🤗🤗
My prompts work perfectly in Aniverse 2.0 but do not work in 3.0; weird. (By the way, in 3.0 it is kind of hard to generate huge boobs lol)
What can I say about the prompt? I really don't know... if I don't see your prompt, I can't say anything.
Aniverse, including the 1.5 version, has always been designed with a small/medium breast (my personal choice)
I don't have that problem in my generations but also am not sure if you are creating something really out of the norm. I have noticed that 3.0 seems to create dark underexposed images. If I add too many loras, I'll get a dark image where I wouldn't in 2.0. Also was unable to use the add_detail pony style loras and just went with hyperdetailed in 3.0. But regardless, this is definitely my favorite pdxl checkpoint by far. Thanks for the hard work @Samael1976
@erndiggity Thank you so much! I never used the add_detail LoRAs for Pony, I will check them later ;)
@erndiggity I can generate a super busty woman in 2.0, but can't really do that in 3.0. As Samael1976 said, a busty body shape is not what this checkpoint was designed for. Maybe 2.0 has more flexibility in body shape, just guessing. This is indeed a great model anyway.
Great model, makes really pretty girls.
Thank you so much :)
The only checkpoint I don't use any styles on, no LORAS and I get perfect results. I don't know how you've done it, but it's my main model from now on. Simply amazing.
It can generate any character, how did you do it, you magic man?!
lol, thank you. 🤗🤗🤗 no magic, I just added some characters to my training (I don't know if Pony already had them or not) and made some small adjustments to the UNet training.
A question: Do I absolutely have to follow the order you used in the positive prompt example? It’s quite different from the way I’ve been doing it so far. How does this new structure benefit the prompt? Thank you very much in advance.
It is not mandatory, but the positive and negative prompt structure gives you a certain assurance of getting creations in the native style of AniVerse. Obviously it is only recommended for output optimization; nobody forbids you to change it.
@Samael1976 Thank you for the response; I love this model, and the results it provides without LORA are incredible. Honestly, you're a genius.
@Carpincho Thank you so much 🤗🤗🤗❤️❤️❤️
Can we use the images we produce with this model commercially?
There are various scenarios.
I can tell you in general that for small or family-run businesses, the answer is yes, they can be used. I kindly ask you to include the image credits, including the model's name, my nickname and the address of my Civitai bio page: https://civitai.com/user/Samael1976 Then obviously, if the business were to go swimmingly, I would not turn down a donation through my Ko-fi page: https://ko-fi.com/samael1976
Since we're fellow countrymen, for once I'll write in our mother tongue: you've made a masterpiece of a checkpoint. I thought the triggers with the values were LoRAs to be added, but instead it's all included; I'll use it a lot xD But is there a list of the triggers that can be activated? (Maybe you wrote it and I missed it; if so, forgive me)
Ciao!!! Finally a fellow countryman! First of all, thank you for the compliments 🤗🤗🤗. As for the trigger words (and also the list of characters I use in the training dataset), partly out of laziness, partly because I never have time (just to give you an idea, I'm now working on AniVerse XL 4.0, the non-Pony version; 3.0 is already ready and will be published sooner or later), my PC is always busy between training and creating images for the galleries... I think (with some sporadic off periods) it has now been on 24 hours a day for 2 years...
@Samael1976 I've been trying it out a bit; I had some difficulty with poses, so I added a LoRA I made myself and more or less solved it xD, somehow it works, but the training you gave it is wonderful. I'm just starting to understand this stuff, but Automatic runs on a VM because my Mac can't handle it xD
@huchukato believe me, there will be improvements in 3.0 (which unfortunately will be a disaster with hands); 4.0 was created precisely to try to patch the hands problem. I partly succeeded, but they remain a big problem. PS: I've been putting money aside for two years to buy a new PC; consider that I'm currently on a 2nd-generation i7 and a 2060 with 12GB of RAM. Now with Black Friday I bought something, but I have to wait for 2025 and the arrival of nVidia's 5 series.
@huchukato wouldn't it be worth spending those 12 euros a month on Google Colab to run Forge or A1111?
@huchukato wouldn't it be worth spending those 12 euros a month and using Forge (identical to A1111) on Google Colab? I use it when I need it:
https://colab.research.google.com/github/RedDeltas/SDForge-Colab/blob/main/RedDeltasSDForge.ipynb
@Samael1976 I had set up Lightning AI, which eats Colab for breakfast, but good lord is it expensive hahaha; that L4 burned through tokens in no time. On it I had one studio with Forge, one with Comfy, one with Ollama running remotely, and other stuff LOL. I opened Colab today for Dreambooth and it's so slow it kills your motivation; maybe I'll try Paperspace, it looks well made. In the end they always give you a Jupyter notebook anyway; I ignore it and connect via SSH from a shell xD
@Samael1976 in the end I'm setting up Forge in a notebook on SageMaker; now you'll see how AWS punches holes in my chest if I go over on hours xD
@huchukato hahahah for training I only use my home PC (consider that my trainings last 20-25 continuous days), because on Colab or RunPod it would cost me my right kidney. I mainly use Colab for generating images.
@Samael1976 Damn, ok, then maybe I'd better wait for the Mac Mini M4 xD
@huchukato but why a Mac? You need a serious GPU, specifically an nVidia with lots of VRAM. With a Mac you won't get very far, I'm afraid.
@Samael1976 Because I don't use it only for this xD I mostly make music and graphics with my computer, and I've been on Mac forever. Anyway, in the end I got Colab Pro and also space on GDrive, and I uploaded your checkpoint there; it's making me swear a bit because one time it understands 2girls and ten times it doesn't LOL xD
@huchukato lol, actually I've never trained with 2girls...
@Samael1976 Oops xD If you want, I can send you the thousands I've generated for training the next version hahaha
@huchukato Thank you, but there's no need ;) in the sense that I will create them... my fear is that they would be too complex for wd14 (or other taggers) to understand, and that it would be hard to figure out what LR value to give the optimizer and the UNet to optimize them.
@Samael1976 You mean making it understand that there are multiple characters in the image? I'm now training a LoRA with the images I've created these past days; let's see if it understands better xD We'll see when the training on Colab finishes; it's just 250 images, which should be fine for a LoRA, I think.
I must say, this is the best model I've ever used. Bodies are more rarely deformed than in other models. Sometimes you have to use Facefix if you want a full-body picture, but other than that, it's been excellent so far c: .
Thank you so much! For the face I prefer to use ADetailer; I think it's faster and better than Facefix ;)
@Samael1976 After using it a bit more, I agree :3 . Facefix tends to take away unique features of the face, but ADetailer doesn't :D .
This is by far my favorite checkpoint to use! Works so well with other LoRAs and the style is impeccable. Keep up the good work!
Thank you sooooooo much 🤗🤗🤗🤗❤️❤️❤️❤️
Thank you for making this model, it is perfect! You're so talented! Please do not change it, it's perfect! :D
LoL, thank you, but... 4.0 is already ready (and I think it is better than 3.0), and now I'm working on 5.0 ❤️🤗😂
I adore this model <3
Really, thank you so much ❤️❤️❤️❤️
This is by far my favorite model.
Thank you so much!!!!
Amazing model! I haven't had the need for LoRAs with this; it generates exactly what I want! The only thing I can't seem to get is colored skin (green, blue, red, etc...) any suggestions? I've tried things like "green skin, green face, green body, green girl". Apart from that, this is the best model I've used.
I had the same problem; try with (green skin), or (green skin face) or similar.
Thank you for your compliments and for your feedback ;)