A mix of Pony with some other stuff. It's an attempt at making Pony more predictable and less dependent on schizo negatives without removing its comprehension and artist knowledge. Personally I'm using AutismMix_confetti for general use and AutismMix_pony for certain loras. If you want to train a lora on top of AutismMix, I recommend doing so on the AutismMix_pony version for better compatibility. The Lightning versions require specific settings to work; read the "About model" information under the download.
What is the difference between the models?
AutismMix_confetti is a small amount of AnimeConfettiTune merged into AutismMix_pony. It has less style swing than Pony and better hands. I prefer this one.
AutismMix_pony is a merge of PonyV6 with loras; it's more compatible with certain styles made for the base PonyDiffusion model.
AutismMix_DPO is AutismMix_confetti plus a DPO lora, made by request. Very similar to the confetti version.
Add 3d to your negatives if you want a more traditional anime style. Quality tags should be the same as PonyV6, but feel free to experiment: "score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up, source_anime, BREAK"
From my testing, schizo negatives and the negative embeds made for SDXL/Pony make it worse, but do whatever you want.
If you have any issues running this model I suggest using this webui: https://github.com/lllyasviel/stable-diffusion-webui-forge
As well as this extension if you get noisy outputs:
https://github.com/hako-mikan/sd-webui-prevent-artifact
Description
This is the 8-step Lightning version of AutismMix. It lets you make images with far fewer steps; you can read more about Lightning here: https://huggingface.co/ByteDance/SDXL-Lightning
This model will only work with the latest A1111/Forge webui (no idea about other UIs) using these settings:
Sampler: Euler A SGMUniform, DPM++ 2M SDE SGMUniform, DPM++ 2M SGMUniform, Euler SGMUniform. I use the DPM samplers.
Steps: ~8-12. I recommend 10; you can do more if you want.
CFG: 1-4. Increasing CFG will fry the image unless you add more steps. I recommend 2.5 if you're using loras.
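The recommended ranges above can be captured in a quick sanity check. This is a purely illustrative Python sketch; the names here are hypothetical helpers, not part of any webui API:

```python
# Illustrative only: the Lightning settings recommended above, plus a
# range check. These names are made up for this example, not a real API.
LIGHTNING_SETTINGS = {
    "sampler": "DPM++ 2M SGMUniform",  # any SGMUniform sampler works
    "steps": 10,                       # 8-12 recommended
    "cfg": 2.5,                        # 1-4; higher fries the image
}

def check_lightning(settings):
    """Return a list of warnings for settings outside the recommended ranges."""
    warnings = []
    if "SGMUniform" not in settings["sampler"]:
        warnings.append("use an SGMUniform sampler")
    if not 8 <= settings["steps"] <= 12:
        warnings.append("steps outside 8-12")
    if not 1 <= settings["cfg"] <= 4:
        warnings.append("CFG outside 1-4; expect a fried image")
    return warnings
```

For example, `check_lightning(LIGHTNING_SETTINGS)` returns no warnings, while typical non-Lightning SDXL defaults (DPM++ 2M Karras, 25 steps, CFG 7) trip all three checks.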
FAQ
Comments (665)
I tried this model to create a jumping cat and it worked perfectly
great
guys, I can't get it: why should I use "scores" for the XL version?
It's from the training method of original Pony. U can check it out on Pony's original description.
It's the way Pony was trained, it will get fixed in the v7 version.
the best checkpoint/base model in this site so far
Better checkpoint than Pony
Does every pony lora work with confetti version?
Is there a 12gb vram version?
Cuz when I try to use it, it says 14.00 MiB etc., low VRAM
Is there some sort of list of the characters that are included in this model? And of what their tags are?
Hi, this might be what you are looking for : https://docs.google.com/spreadsheets/d/1m2W-pZEvHuEpfHcNHrxCSr-Aw1mgtUUYho6sz9LChEA/edit?pli=1&gid=0#gid=0
This one is the BEST checkpoint you can find. I am amazed by the quality of the images generated by it.
when i generate female character using the ponyxl version, the character waist appear to be very thin. anyone have an explanation for this?
it's probably something automatic inferred from other paramaters, maybe link a generated image with metadata so that ppl can see
There is a danbooru tag <narrow_waist>. I put it in the neg prompt like this: (((narrow waist))).
There is also <long_torso> tag which can be put in neg prompt.
I'm using ComfyUI and these generally make ladies with more human proportions.
Weird scenario I only discovered while doing some tests on models I'm using. Every image I'm generating with AutismMix_pony is getting the word 'Ugly' appended to the beginning of the prompt, as though a style were being applied. This doesn't happen with any other model I'm using and I'm not applying any styles at all. I do see something similar with another model I've got installed but don't actively use, but it's tossing in a very different set of keywords. Has anyone else seen this behavior?
-Edit: Resolved! Apparently the 'Model Keyword' extension, which I did not install in Automatic1111, had been adding things like this to a variety of models. Disabling it restored expected behavior!
Hey i run models in jupyter notebook, how do i run this? is there a model card of sorts like huggingface with some pipeline code?
For me, one of the best
my favorite
Works well
awesome
Crazy how even after 10 months, this remains the best pony model :O
Why is it better
one of. probably most beginner friendly
neat
So my understanding of this is, it's very good at making freaky shit with reduced risk of making unintended..freaaakky shit
Works very well
One of my go to loras besides Pony diffusion
W
💯😍😍😍👏👏👏👏👏🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥
for some reason it works just perfectly when generating online and makes some crappy quality pics if I use PC generation
I have no idea what I do wrong 😔
If by generate online you mean going on a website and generating there then it is because when you do that your image is generated by a different more powerful computer. When you use your own to do it through stable diffusion it will be worse
@nickbrandondown983 the power of a PC will have zero effect on the generated image's quality, only the time it takes to generate.
What's more likely is the way the model is being used, a website has likely spent time optimising their parameters in the background.
okay, if anyone was wondering how it goes (nobody I know)
I used ComfyUI and it wasn't working well. Obviously I need more skill to use ComfyUI.
I've got flawless results using Forge! I can't be more satisfied now
Such a good model
Edit: I'm rewriting this to best explain my experience.
I was very confused on how to use this model. I've seen great images made using this model, but when I tried to use this model I got very low quality cartoon images. I did a lot of things that did and did not improve the quality of my results.
After some testing here's my findings (from my experience, yours and others may be different):
- Some loras just do not look good with this model. They may be compatible, but adding the loras gives worse results, this can be somewhat solved by lowering the CFG Scale (I did 3).
- Prompt size affects image quality. From what I've tried, shorter vague prompts give better results, and larger more detailed prompts give worse results.
Although it seems the negative prompt affects image quality more than the main prompt. A short main and negative prompt gives better results; a long main prompt and long negative prompt gives worse results; a long main prompt and short negative prompt gives average-to-good results (though I also found that a main prompt that isn't detailed enough gives bad results). So, as far as I can tell, it's best to keep your negative prompt short, but you can more or less be free with your main prompt (to an extent and within certain rules).
- Changing the VAE does nothing or is barely noticeable.
- Lowering the CFG Scale gives better results, but obviously at the cost of how much it follows your prompts. The CFG Scale definitely affects the quality and style of the image. In my experience a CFG of 3 gives good quality results, a CFG of 10 gives at least average (but not as good) results but the colors are more saturated.
- The Sampling Steps do affect the image. From what I've seen you can get a little more detail and thicker, sharper lines with higher steps, but the end result can also be different (different body pose; how much it changes is inconsistent).
- The Sampling Method is most important but is also inconsistent. I was using DPM++ 3M SDE and my results were bad, I switched to Euler a and my results immediately got better.
What I found is that DPM++ 3M SDE at CFG Scale 10 gives bad results, but Euler a at CFG Scale 10 still gives pretty good results (but they get even better at lower CFG at the cost of how much the results align with your prompt).
DPM++ 3M SDE at CFG Scale 3 gives about the same results as Euler a at CFG Scale 3.
Other random Sampling Methods I tested give differing results. Some gave unusable results (warped/missing/additional body parts), some gave worse quality images.
Conclusion: Keep prompts as small as possible, especially the negative prompt, BUT you may have to expand the main prompt to get better results.
The CFG Scale affects image quality the most. Lower CFG = better quality wise, not great for getting the conceptual results you want. I tested 3 and 10, but 5 and 7 may or may not look good too.
I recommend the "Euler a" sampling method, just because it was the first one I tried that worked well and is usually the default sampling method for a lot of generators. Other sampling methods may work fine.
Sampling Steps does affect the image result somewhat, sometimes adding slightly more detail to lines and shapes and sometimes adding more color, as well as affecting larger details like body position/pose. I'd say just keep raising or lowering it to something that works well.
Some loras just do not look good (jagged lines, undefined shapes) unless the CFG Scale is low.
Update: I've found that the order (and existence) of prompts changes the image quality/style. I generated some images, looked nice. I add one more prompt (specifically the prompt "open mouth" to change the character's facial expression) and suddenly the style changes to one much more 2.5D and realistic with darker shades.
I moved the new prompt closer to the start and suddenly the artstyle is closer to the original style I wanted (there is still a slight change in style but less noticeable). I remain confused about how this model really works.
you must be doing something wrong. maybe your prompts have logic conflicts that burn the logic too much. from my experience longer prompts just start omitting things, but overall stay good.
dpm3m sde is a funny sampler. it has high highs and very low lows, but I don't think there's a sampler with higher highs. On that note, try dpmpp 2m, this (and euler a) really have the best balance, imo.
What does affect the quality in a huge way is the size of the negative prompt. Keeping it short is pretty crucial.
I didn't really test this model with CFG rescale, but I find that Pony largely doesn't need it and more often it actually makes the results worse; Illustrious benefits more from it. Maybe rescale is what is fudging your images.
@zekses I've done more testing. In short: the issue is most likely the prompt (but I will update if things go wrong again).
- Cutting down the negative prompt to as small as possible (to things like "score" and "source") is very important. With a large negative prompt I get more noise and bright red/blue/green colors.
- The format and size of the main prompt has less effect on the quality of the images? (unsure, based on instances where having shorter main prompts created better quality images than longer main prompts, but now suddenly those longer prompts aren't resulting in worse quality images anymore).
- Removing certain words definitely improved the quality of images. I assume those words were the sole reason things went bad, because after removing them I went back to using DPM++ 3M SDE and CFG 10 and (although the Sampling Method and CFG still have a big impact on quality) I no longer get the poor quality cartoonish images I had before. Images look more like the style I wanted now.
- Side note: Adding "BREAK" has an impact on the results. In my experience: Adding BREAK to the prompt results in better quality images, but certain details are missing (incorrect physical appearance). Not having BREAK results in more accurate images (correct physical appearance), but the image quality is worse (splotchy lines and shapes).
Switching to a different Sampling Method and changing the CFG somewhat solves this quality issue (depending on the method and CFG, I went to Euler a and CFG 5), but has also resulted in the some details not appearing (correct physical appearance, incorrect clothing).
So for now I'm assuming that the image quality issue was resolved (I will know for sure when I go changing things again). But now I have an issue with image results adhering to the prompt.
@AICML unfortunately pony checkpoints have awful prompt adherence when more than one character is involved. you need to do region prompting to keep things both adherent and logical. otherwise the concept bleed between entities is pretty horrible
Question: I'm trying to change the skin color to purple, blue, or violet. nothing I seem to be using in the prompt, i.e. (purple skin color:1.0) seems to work. ideas or suggestions?!
please and thank you!
try "purple body"
works well!
Hi, do you recommend any resolution for better results?
This checkpoint is great, I use it every day (confetti and pony versions), but sometimes if we use confetti too much, I don't know why, images start getting weird like a stained glass window. Do you have an idea?!
thx
AutismMix is an SDXL model. SDXL models are trained at around one megapixel (1024x1024 and equivalent aspect-ratio buckets). Use these resolutions for any SDXL model: 1024x1024, 896x1152, 832x1216, 768x1344, 640x1536
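The pattern behind that list can be checked in a few lines. This is an illustrative sketch, assuming the usual SDXL bucket convention (area near 1024x1024, both sides divisible by 64):

```python
# Illustrative: the recommended SDXL resolutions above all land near
# one megapixel with both sides divisible by 64.
SDXL_RESOLUTIONS = [
    (1024, 1024), (896, 1152), (832, 1216), (768, 1344), (640, 1536),
]

def is_sdxl_friendly(w, h):
    """True when a resolution roughly matches SDXL's training buckets."""
    area_ratio = (w * h) / (1024 * 1024)
    return 0.9 <= area_ratio <= 1.1 and w % 64 == 0 and h % 64 == 0
```

Every resolution in the list above passes this check, while something like 512x512 (an SD 1.5 size) does not, which is one reason SD 1.5 resolutions produce poor results on SDXL models.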
Found this model on the dreamerland AI tool. Can I use this model on dreamerland to generate images for commercial purposes?
Cool
Yeah
nice
Works well
Nice tool
works nice
Still the best pony model out there
good
works really well
Very nice tool
Can someone explain to me why two images are stored in the output folder when generating with Forge with "Hires. fix"? One has a small file size (for example 434 KB) and one is larger (8.65 MB). If the larger image had better resolution and quality I could understand that (the small image before the hires upscale, the large one after). But both images have exactly the same resolution and I don't see any difference.
One is in jpg and the other in png, they are the same image different format
The compression rates are different; the quality difference is hard to tell with the naked eye
Imagine a version of this model based on NoobAI-XL.
Thank you very much for your hard work on this. I'm super looking forward to trying this out! <3
Will you be making an Illustrious version?
Such cool model! Great Job!
Powerful and versatile.
Great results every time :)
Awesome
Is there a way to force this checkpoint to maintain its cartoon look? I feel like as soon as I add a lora with even the smallest amount of realistic data, the images it generates become skewed towards 2.5D. Source_anime doesn't seem to do the trick.
Some LoRas will do that regardless of what you do. Try source_cartoon as well as source_anime and put 3D, 2.5D, and realistic in the negative. You may also need to lower the weight of the LoRa. If it keeps doing that, then it's just not compatible.
I like this model very much
hi, I'm wondering something about the "BREAK", does this tag work with automatic1111?
Yes. Break is used to force a new chunk to load. here's a good explanation.
"In Auto1111, SD processes the prompts in chunks of 75 tokens. We all know that prompt order matters - what you put at the beginning of a prompt is given more attention by the AI than what goes at the end. But here's the thing: This rule isn't about the whole prompt, but for each chunk. The AI gives more attention to what comes first in each chunk. So if you have a very long prompt of 300 tokens or so, the attention will be highest on the first few tokens, then token 76-80, then 151-155, then again at 226-230 etc. Every 75 tokens, you get a peak of attention. Just a minor change in the order of your prompt around these points will matter a whole lot, but at other spots in your prompt the order will make very little difference."
@Gyer How does it interact with Regional Prompter, which uses BREAK to signify prompts for the specified regions?
@Gyer Wow, that IS a good explanation. Thanks. Got a link to the quote?
@Gyer Thanks for the explanation! But I have another question: how does it work? Do I have to write it in capital letters? And what order do I use, "color, BREAK, color" or "color, BREAK color"?
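The chunking behaviour described in the quote above can be sketched in a few lines of Python. This is an illustration of the idea only, not A1111's actual code; the real implementation chunks CLIP tokens, not words:

```python
CHUNK_SIZE = 75  # A1111 processes prompts in chunks of 75 tokens

def chunk_prompt(tokens):
    """Split a token list into 75-token chunks. BREAK force-closes the
    current chunk, so whatever follows it lands at the start of a fresh
    chunk, where attention is highest."""
    chunks, current = [], []
    for tok in tokens:
        if tok == "BREAK":
            if current:
                chunks.append(current)
            current = []
            continue
        current.append(tok)
        if len(current) == CHUNK_SIZE:
            chunks.append(current)
            current = []
    if current:
        chunks.append(current)
    return chunks
```

For example, `chunk_prompt(["score_9", "BREAK", "1girl", "smile"])` yields `[["score_9"], ["1girl", "smile"]]`: "1girl" now starts a chunk, giving it one of the attention peaks described above.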
Still one of the consistent greats.
The model is just super!!
Very strange issue occurred where one day, all prompts used with this checkpoint only suddenly started producing noise. No idea why it happened, it was working wonderfully only a few hours earlier. I had a picture with a beautiful seed and everything, then when I did the exact same prompt and seed and settings, now it only produced noise. Anyone know why?
All I did was close the sdwebui thing, and then open it a few hours later. I didn't change anything.
strangely enough, deleting and reinstalling the model worked... however, the file showed no signs of being modified for months.
I'm having the same issue, but only with certain loras
Why is confetti suddenly "not available for generation"?
guess it's gone for a week coz it's not among top 200 popular checkpoints? this system sucks…
So now I can't use this anymore? What kind of shit is this?
:( Can't gen onsite using it anymore (not among top 200 most popular), even though before the checkpoint bidding system it was always available
-_-
Oh noes, Confetti isn't on anymore.. (auctions, I know^^). Confetti has been my standard/default checkpoint here on Civitai and on mage.space in the past when doing cartoony/anime/drawn stuff or just testing things on PonyXL. Just wanted to say that I like it a lot. <3
Funny how people who used to pay for buzz used it on a service (generating) rather than using it to be able to use said service (stupid auction system.)
I am creating a picture of a man licking a girl's belly but it does not give the result I want. The man does not lick the girl's belly. I use the appropriate lora but it does not work.
ok
Is there a recommended VAE for this model? My outputs are coming out very orange/yellow saturated in img2img and I can't figure out why?
Try remixing other posts first
I think the VAE is already baked in. I am using AutismMix_confetti without VAE and it works fine. Maybe some other settings are responsible for the oversaturation?
So well done 💯💯
I see what you did there with your pfp and username, very clever lol
@xDegenerate I gotta keep it real + I already had the pfp so It would have been a sin not to name me this way 😂😂
very good
great job 👍
nice
this has helped me SO much more than u know😭
Nice picture
Please let us use the model again
W model
Yes.
Nice model.
1
This model can't generate faces as well anymore
which one could?
Or hands/fingers - even with the negative hands embedding.
Downloaded the model, feedback is good
very nice pastel like texture
Okay, time for the "x-rated" question. I get great results with this checkpoint using a couple of LoRAs (commix_style and comistyle, with a few details and age ones as well) so that my results look amazing.
I can also do x-rated stuff in a couple of positions (doggy, oral stuff) that look fine as well, but the second I try for missionary the woman's face distorts either badly, or into a Picasso-esque mess. Like totally ugly. Not changing any other parameters or prompts just removing one sex pose and substituting missionary.
Any suggestions to fix that? Thoughts?
The only thing I can suggest is grabbing either a missionary LoRA which might help define faces, or use character specific prompts or LoRAs wit varying strengths to try and force the female character to look a certain way.
(I find missionary specific LoRAs tend to work the best, you could experiment with combining several for different positions and "camera angles")
Great
Very good anime model. Thanks.
does this model not share the same base characters with pony? sometimes characters won't appear on here... is there any autismmix-specific character list set?
nice
amazing
Can't wait till i get a better GPU. Really wanna go wild with this model!
It says that AutismMix_pony is unavailable, is it being deleted?
@gooseymoosey Others are still generating new posts using AutismMix_pony, but mine is still unavailable. Can somebody please explain how to fix this issue?
Meant to be over in 4 days
Hi question about AutismMix_pony, is this Checkpoint no longer free to use on Civitai, as over the past day it comes up with bid instead of create?.
Shame if so, because it's a great tool,
Thank you
Idk, regardless the bidding is meant to be over in 4 days
@JakeDean67926 Ok cool, I'll wait and see what happens towards end of next week.
What happened to it? Is it coming back?
We back friend, we got our mix back!
nice
The one true god among Pony models
it's gone again...
forgot to bid on it this week... mb
See ya in 5 days, 19 hours, and 25 minutes
What even is Civitai without The Tism mix?
How did we get out bid?!
Welp see ya in 6 days, 21 hours, and 37 minutes
Stop bidding.
i am so tired of this shit ass bidding system man what even is the point
No more autism :(
For five minutes, could you please not bid, FOR FIVE MINUTES!!!!!!
I spent all my buzz before the auction began 🙏🙏🙏
looks like it’s ok to be used again!
How are some able to generate with AutismMix SDXL on site?
It says it's unavailable for generation.
Details
Files
autismmixSDXL_autismmixLightning.safetensors
Mirrors
autismmixSDXL_autismmixLightning.safetensors
autismmixSDXL_autismmixLightning8step.safetensors
autismmixSDXL_autismmixLightning8st.safetensors
3r2wrwffrwerfe4f_v10.safetensors
Available On (1 platform)
Same model published on other platforms. May have additional downloads or version variants.
