Version names include the base model they were trained from; use the appropriate version for each model. The Rouwei version won't work well with Noob, and the opposite is also true. Old Noob versions will also be too weak on fresh Noob checkpoints, and eps and v-pred should each be used with their dedicated versions. Don't forget to check the version descriptions for more specific info and the list of supported style tags.
Anima v4 info: Update for the recent Anima preview 3 version. The list of styles is the same.
Anima v3.5 info: Same version as anima-p2_v3, but trained on the full range of blocks. Should™ just work better overall. Out-of-dataset tests are still fine. The list of styles is the same; grid.
Anima v3 info: V3 is the version trained from and for Anima preview 0.2. It still won't affect data outside of the dataset so much. The style list is the same as v2; grid with styles.
Anima v2 info: A bit late, since the 0.2 preview is already out, but whatever. It was trained from the 0.1 version of the preview model. Catastrophic forgetting was addressed and improved; at least it's not so disastrous now. Mixing between styles was also slightly improved, per empirical tests. List of all styles and grid. It will not work as well with the 0.2 preview.
Anima v1 info: The Anima version of the LoRA comes with a limited dataset and a list of the styles included with it. It's more of a test version. The model is at an early stage and too incoherent at anything above 1 MP. If you decide to upscale, you will need to use tiled methods, unless you're okay with artifacts. Style tags should be invoked with @ before them. Mixing somewhat works, but not the way it did with default XL weighting; it behaves more like compel mode.
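To illustrate the @-prefix convention described above, a minimal sketch of building a prompt that invokes and mixes style tags (the base prompt here is a made-up example; only the tag names come from the style list):

```python
# Style tags are plain prompt tokens with a leading @, so they are
# simply joined into the prompt like any other tag.
base_prompt = "1girl, solo, looking at viewer"
style_tags = ["@nyalia", "@shiroski"]  # mixing two styles from the list

prompt = ", ".join(style_tags + [base_prompt])
print(prompt)  # @nyalia, @shiroski, 1girl, solo, looking at viewer
```

How strongly the styles blend depends on the model version; as noted above, mixing behaves more like compel mode than default XL weighting.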
MIO - mix-in-one versions, with a mix of AI styles and some real artists in the dataset.
BIO - best-in-one versions, containing only artificial images in the dataset: 90% NAIv3 or styles from local models with a very distinctive look only.
Comments (13)
Would it be possible for you to put the list of styles/triggers in alphabetical order? It would be easier to go through that way.
@8chan
@afezeria
@aimpressionism
@alens
@alphonse (white datura)
@amindarano
@anonnoodles
@aruhshura
@balecxi
@bb_ta
@betabeet
@bigrbear
@brittle
@buruaka
@calculus_0001
@channel (caststation)
@chlenix
@cleandongye
@deadpurity
@doribae
@E_0x0194_type1_iloveclipleaking
@E_0x0194_type2_uniquestring
@E_0x0194_type3_whatevertowrite
@E_0x0194_type4_dontask
@eqota
@firedotinc
@fltccktl
@galawave
@gs-mantis
@half-ai_creator
@hllv
@honkail
@kumarang
@kyomu
@lativi
@lattekoi5252
@localame
@localnoodles
@lucifel99_type1
@lucifel99_type2
@memento_mori
@merratatustle
@merrytail_new
@merrytail_old
@namako daibakuhatsu
@nikukyu
@notamamizore
@nyalia
@nyalia_2025
@nyaqiq
@orenji (orenjipiiru)
@pnya
@qua_sho_dea
@ramwam
@rano_u_rabe
@rikku-143
@sabotenman
@sakurakono
@sbmssvmt_v2
@secret_style_test
@shimamuraa11451
@shiroski
@sho_sho_tidufa
@shutenanon
@shutenanon_v2
@sweetonedollar
@t2_bb
@tencarcyan_old_type1
@tencarcyan_old_type2
@tianliang duohe fangdongye
@trauter
@ugaaaa1
@uniunimikan
@xi410_type1
@xi410_type2
@yafkyu
@youlichu
@youmuanon_v2
@yukihotaru_type1
@yukihotaru_type2
@HaloSkull Okay, guess you already did it; I'll put that in a catbox then https://files.catbox.moe/lg5arc.txt
@bakariso I was more so thinking about when you add more or remove one, haha. Thanks
@bakariso
I messed up the last four, sorry.
@youlichu
@youmuanon_v2
@yukihotaru_type1
@yukihotaru_type2
@HaloSkull Will just add a few more lines of code to sort in the future, not a big deal
@HaloSkull Ran the sorting myself; will just add that sorting and run it along with the other stuff from now on https://files.catbox.moe/p28icg.txt
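The sorting step mentioned here really is just a couple of lines; a minimal sketch of what it could look like (the in-memory list stands in for however the tags are actually stored):

```python
# Sort style tags case-insensitively, ignoring the leading @ so the
# order matches a plain alphabetical listing of the names.
tags = ["@yukihotaru_type1", "@yukihotaru_type2", "@youlichu", "@youmuanon_v2"]

tags.sort(key=lambda t: t.lstrip("@").lower())
print("\n".join(tags))
# @youlichu
# @youmuanon_v2
# @yukihotaru_type1
# @yukihotaru_type2
```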
@bakariso Cool, Thanks!
This is a masterpiece
Doesn't seem to work the same on a 4-step or a 12-step model; each prompt does get different results, but not what you get at a full 20-30 steps.
This one? https://civitai.red/models/2560840/anima-turbo-lora Well, it's expected not to work properly with probably any distillation
That one and RDBT.
I wonder if there is a way to get it working.
@greyblades In the current state, probably no chance; one LoRA overweights the other. The only way is to retrain with the same config tdrussell was using