Creates videos in a modern HD anime style; useful for maintaining this style when a prompt or model tends to drift toward realism or a 3D-animation aesthetic.
The primary trigger is "An1meStyl3," but it is recommended to also include the tag "AnimeStyle" alongside it.
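As a quick illustration of how the two tags are used together, a positive prompt might look like the sketch below. Only the trigger tags come from this page; the scene description is made up:

```python
# Hypothetical prompt; only the two trigger tags are from the model card.
positive = "An1meStyl3, AnimeStyle, a girl walking through a rainy neon-lit street"
```

Placing both tags at the front of the prompt is a common convention for style LoRAs, though position is not strictly required.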
Version 1 Notes
WAN 2.2 appears to have a strong bias toward realistic generations, and training this LoRA for more than 2,000 steps caused overfitting and a large amount of noise, even with 6 low-noise steps. As a result, you can get more consistent generations by raising the model strength to 1.2 or by adding "((realistic))" to your negative prompt. If you are using one or more concept/motion/character LoRAs that are themselves biased toward realism, use a stronger negative such as "(((realistic))), ((photograph))".
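The doubled and tripled parentheses follow the common A1111/ComfyUI emphasis convention, where each nesting level multiplies that token's attention weight by roughly 1.1. A minimal sketch of that arithmetic (illustrative only; the real scaling happens inside the prompt parser and conditioning code):

```python
def emphasis_weight(token: str, base: float = 1.1) -> tuple[str, float]:
    """Compute the attention multiplier implied by nested parentheses.

    Follows the common A1111/ComfyUI convention: each level of
    parentheses multiplies the token's weight by `base` (default 1.1).
    """
    depth = 0
    # Strip one matched pair of parentheses per nesting level.
    while token.startswith("(") and token.endswith(")"):
        token = token[1:-1]
        depth += 1
    return token, base ** depth

word, weight = emphasis_weight("((realistic))")
```

So "((realistic))" weights the token at about 1.21, and "(((realistic)))" at about 1.33, which is why the stronger negative is suggested when companion LoRAs also push toward realism.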
This is intended to be a general-use anime LoRA. It was trained from stills on a diverse set of shows that share a common general art style, rather than fixating on a particular show or creator, so that it does not closely resemble any single illustrator's signature style or creative choices.
Version 2 Notes
In all tests, this version appears significantly more stable in the standard 4-step WAN 2.2 I2V workflow, and it can be combined with any type of concept or motion LoRA without introducing unwanted noise or realism. The same negatives are recommended, but 4 steps at a strength of 1.0 should be sufficient.
Version Notes:
Version 1: Helps maintain style on characters; suffers quality loss during complex motion or dramatic camera movement. Trained on a set of still images.
Version 2: Helps maintain style on characters and background scenes; more traditional character movement as seen in Japanese animation. Greater compatibility with concept and transition LoRAs without introducing style loss. Trained on a set of images and a set of video clips.
California AB 2013 Training Data Disclosure
This LoRA was fine-tuned using visual data consisting primarily of still images sourced from animated television series, along with a limited amount of publicly available fan-created renderings and AI-generated images. The training data includes copyrighted material owned by third parties, including animation studios, production committees, distributors, and individual artists. No training data was licensed or purchased. This LoRA is provided for non-commercial use only under the terms of its distribution.
The dataset for all versions consists of more than 1,500 images collected from publicly accessible sources, drawn from more than a dozen animated series released between approximately 2000 and 2025. Image data was processed through standard resizing, cropping, normalization, and labeling steps. Synthetic images were also included in the training dataset.
A second version of this model additionally incorporates approximately 200–250 video clips sourced from more than half a dozen animated series. In that version, clips were used as video sequences to support motion and temporal consistency, and were processed and labeled for video training in addition to the image preprocessing described above.
This model is intended for non-commercial, experimental, and educational use. Generated outputs may reflect copyrighted visual styles or themes associated with the underlying training data. Users are responsible for ensuring compliance with applicable copyright law, other intellectual property laws, and all other applicable laws.
Comments (3)
From what I've tested, especially with low noise, the LoRA tends to change the internal details of the character's eyes.
Also, the model's tendency to turn 2D into 3D/realistic video is really hard to correct. I'd recommend going for a 3D style (3D anime, cel shading, toon shader, or NPR shader) mixed with a bit of Live2D or 2D anime aesthetics; in my opinion, that works better than pure 2D anime for I2V.
Finally somebody going after animation style. This is still quite CGI-ish... but certainly an improvement.
In my experience, Wan will detect the style in the source image: it will animate a photo differently than an anime frame. But with anime it's not doing typical Japanese anime animation; it's more like those Chinese animations done with 2D parts being moved around, rather than the whole frame being redrawn.
I've noticed similar issues with some of the movement, and it also seems to produce very high noise during certain types of camera movement. My hope is to create a V2 of both I2V and T2V that adds a further set of training data based on clips with different types of character and camera movement. I've begun compiling a dataset, but I think I still have a way to go to get the concept diversity I'm shooting for; if everything goes according to plan, it is on the roadmap though!