Each version name includes the base model it was trained from; use the appropriate version for each model. The Rouwei version won't work well with Noob, and the opposite is also true. Old Noob versions will also be too weak on fresh Noob checkpoints, and eps and vpred checkpoints should be used with their dedicated versions. Don't forget to check the version descriptions for more specific info and the list of supported style tags.
Anima v4 info: Update for the recent Anima preview 3 version. The list of styles is the same.
Anima v3.5 info: Same version as anima-p2_v3, but trained on the full range of blocks. Should™ just work better overall. Out-of-dataset results are still fine. The list of styles is the same; grid.
Anima v3 info: V3 is the version trained from and for Anima preview 0.2. It still won't affect data outside the dataset much. The style list is the same as v2; grid with styles.
Anima v2 info: A bit late, since the 0.2 preview is already out, but whatever. It was trained from the 0.1 version of the preview model. Catastrophic forgetting was addressed and improved; at least it's not so disastrous now. Mixing between styles was also slightly improved, based on empirical tests. List of all styles and grid. It will not work as well with the 0.2 preview.
Anima v1 info: The Anima version of the lora comes with a limited dataset, plus a list of the styles it includes. It's more of a test version. The base model is at an early stage and too incoherent at anything above 1 MP; if you decide to upscale, you will need to use tiled methods, unless you're okay with artifacts. Style tags should be invoked with @ before them. Mixing somewhat works, but not the way it did with default XL weighting; it behaves more like compel mode.
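The "@ before style tags" rule above can be sketched as a small helper. This is purely illustrative; the tag names below are hypothetical placeholders, not entries from the lora's actual style list.

```python
# Illustrative sketch: prefix each style tag with "@" (as the version notes
# require) and join everything into a comma-separated prompt string.
# "examplestyle" is a placeholder tag, not a real style from this lora.

def build_prompt(style_tags, subject_tags):
    """Return a comma-separated prompt with each style tag prefixed by '@'."""
    styled = ["@" + tag.strip() for tag in style_tags]
    return ", ".join(styled + list(subject_tags))

print(build_prompt(["examplestyle"], ["1girl", "masterpiece"]))
# -> @examplestyle, 1girl, masterpiece
```

Swap in tags from the version's style list; when mixing several styles, remember the blending behaves more like compel mode than default XL weighting.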
MIO - mix-in-one versions, with a mix of AI styles and some real artists in the dataset.
BIO - best-in-one versions, containing only artificial images in the dataset, about 90% naiv3 or styles from local models with a very distinctive look.
Description
Same version as this one, but for the noob epsilon 0.5 checkpoint. List of all styles in this version.
Comments (7)
Would it be possible for you to release a Noob or Illustrious lora of JUST the brittle style? I find that having so many in one makes colors and details very inconsistent.
A separate lora will probably work even worse than one with a bigger dataset.
@bakariso Really? You could make a 200 MB lora on JUST brittle, which I'm sure is more than its share of the current lora, and it's not going to get mixed/blended with other styles.
@msiaigens Size doesn't matter lol; some datasets will look great as separate loras, some won't, but multiple datasets in one lora work better for style transfer. Are you using it with the exact model it was trained from?
@bakariso Don't get me wrong, it looks fine; it's just that character lighting, colors, and eye highlights are very inconsistent. Try hestia, for example: even with everything else the same, just changing the seed changes the style's detail level, color, shading, etc. a lot, to the point where some images look very different in style. I do use noobai; I even used the noobai stabilizer, and it helps a bit but not nearly enough. I've tried every combination of samplers, schedulers, second-pass denoise, etc.
I didn't know that about loras; I thought size meant quality. That's good to know, sorry for my ignorance.
use another checkpoint bro
@Netryn I use the one listed here, which this lora was trained on and which brittle uses, so that's not it.











