NEW: ANIMA (anima base preview 3) version! No trigger word.
Supports natural language prompts, booru prompts, and a mix of the two (see the example images for prompts)
Style LoRA trained on NoobAI-XL vpred 1.0 with images from the artist 2n5 / mikenko2222, best known for drawing traps / otoko no ko.
Version 3.0 does not require any trigger word; older versions do (jz235).
It works on the mentioned model and its mixes:
https://civarchive.com/models/833294/noobai-xl-nai-xl
Negative prompts and quality tags are optional and not at all necessary.
v3.0 is new and more faithful to 2n5's more modern style. It works well when applying the style to things that aren't part of its training dataset. I recommend this version.
"But mipmup!" You might say. "Why did you train this??? NoobAI already knows 2n5 and there are other 2n5 LoRA's alrea-"
NoobAI knows 2n5 already, but its mostly his older works. And the existing LoRA's here on civit suck, with worse training settings and they also probably used most of 2n5's older art when training. What I wanted was a LoRA that could replicate his newer more modern style.
Description
slightly larger dataset (40 images) - first autotagged and then manually checked, removing unnecessary tags/adding missing tags to images as needed.
pre-downscaled the images before training using a custom image downscaling script
sd-scripts does this already, but I wanted higher downscale quality, so the script downscales each image with both Lanczos and LSID and blends the two results together at 40% (Lanczos) and 60% (LSID).
LSID from paper: https://arxiv.org/abs/2510.24334
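The blending step above can be sketched in a few lines. This is a minimal pure-Python illustration of the 40/60 weighted blend, not the author's script: the two resamplers here (area averaging and nearest-neighbour) are stand-ins, since real Lanczos and LSID implementations are outside the scope of a short example.

```python
def area_downscale(img, factor):
    """Average-pool downscale of a 2D grayscale image (stand-in for Lanczos)."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h, factor):
        row = []
        for x in range(0, w, factor):
            block = [img[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def nearest_downscale(img, factor):
    """Nearest-neighbour downscale (stand-in for LSID)."""
    return [row[::factor] for row in img[::factor]]

def blend(a, b, w_a=0.4):
    """Per-pixel weighted blend: w_a * a + (1 - w_a) * b (40/60 by default)."""
    return [[w_a * pa + (1 - w_a) * pb for pa, pb in zip(ra, rb)]
            for ra, rb in zip(a, b)]
```

In the real script, `a` would be the Lanczos result and `b` the LSID result; the blend weight is all the example is meant to show.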
enabled flip augmentation, since no image in the dataset would develop symmetry issues when flipped, and to diversify the dataset
used EDM instead of min-SNR for timestep loss weighting during training
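For reference, the two weighting schemes being compared can be written down compactly. A hedged sketch assuming the standard formulas from the respective papers (min-SNR-gamma from Hang et al., EDM from Karras et al. 2022); the exact form sd-scripts applies for vpred models may differ:

```python
def min_snr_weight(snr: float, gamma: float = 5.0) -> float:
    """Min-SNR-gamma loss weight for epsilon prediction: min(SNR, gamma) / SNR."""
    return min(snr, gamma) / snr

def edm_weight(sigma: float, sigma_data: float = 0.5) -> float:
    """EDM loss weight: lambda(sigma) = (sigma^2 + sigma_data^2) / (sigma * sigma_data)^2."""
    return (sigma ** 2 + sigma_data ** 2) / (sigma * sigma_data) ** 2
```

Min-SNR clips the contribution of easy (high-SNR) timesteps; EDM instead weights the loss continuously as a function of the noise level sigma.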
the final LoRA is a 50/50 blend of epoch 56 and epoch 60, made with the LoRA merger in sd-scripts.
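The 50/50 epoch blend is conceptually just a weighted average of the two checkpoints' weights. A minimal sketch of that idea (the real merge runs over safetensors tensors via sd-scripts' LoRA merger; plain lists stand in for tensors here):

```python
def merge_state_dicts(sd_a: dict, sd_b: dict, ratio_a: float = 0.5) -> dict:
    """Per-key weighted average of two LoRA state dicts with identical keys."""
    assert sd_a.keys() == sd_b.keys(), "epochs of the same run share keys"
    return {
        k: [ratio_a * a + (1 - ratio_a) * b for a, b in zip(sd_a[k], sd_b[k])]
        for k in sd_a
    }
```

With `ratio_a = 0.5` this is exactly the 50/50 blend of epoch 56 and epoch 60 described above.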
resized the LoRA using this script with a threshold of 2.95, shaving off 30-35 MB and reducing overfitting.
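Resizing a LoRA by threshold generally means truncating small singular values of each weight delta. A rough NumPy sketch of that idea, interpreting 2.95 as a max-singular-value ratio (an assumption; the actual script's criterion may differ):

```python
import numpy as np

def truncate_delta(delta_w: np.ndarray, sv_ratio: float = 2.95):
    """Low-rank truncation of a LoRA weight delta.

    Keeps singular values s_i with s_i > max(s) / sv_ratio and returns
    the corresponding up/down factors; smaller components are dropped,
    which is what shrinks the file and trims overfit detail.
    """
    u, s, vt = np.linalg.svd(delta_w, full_matrices=False)
    r = max(int((s > s[0] / sv_ratio).sum()), 1)  # keep at least rank 1
    up = u[:, :r] * s[:r]   # fold singular values into the "up" factor
    down = vt[:r]
    return up, down
```

For example, with `sv_ratio = 2.95` a delta whose singular values are [10, 5, 1] keeps rank 2, since 1 < 10 / 2.95.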