CivArchive
    Artius WAN Uncensored - v1.0
    NSFW

    A New Era in Image and Video Generation!

    Artius WAN is a lovingly crafted blend based on Wan 2.1 T2V 14b and a diverse collection of LoRAs, making it a versatile model with powerful NSFW capabilities.

    This model excels at generating highly detailed images (1920x1080) as well as videos from text prompts—all in just 5 steps!

    For optimal performance in ComfyUI, be sure to install this essential node:
    👉 https://github.com/ClownsharkBatwing/RES4LYF
    You will also need the WAN VAE and text encoder: https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged
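    As a rough sketch of where these pieces land in a ComfyUI install (the folder names are ComfyUI defaults; the exact file names below are assumptions, so check the linked repos):

```python
import os

# Hypothetical layout check for the files this model needs.
# File names are guesses based on the repackaged repo; verify before relying on them.
REQUIRED = [
    "custom_nodes/RES4LYF",                                         # git clone of the RES4LYF repo
    "models/vae/wan_2.1_vae.safetensors",                           # WAN VAE
    "models/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors",  # WAN text encoder
    "models/diffusion_models/artius_wan_v1.safetensors",            # this checkpoint
]

def missing(comfy_root: str) -> list[str]:
    """Return the required paths that are absent under a ComfyUI root."""
    return [p for p in REQUIRED if not os.path.exists(os.path.join(comfy_root, p))]
```

    Running `missing("/path/to/ComfyUI")` after downloading should return an empty list when everything is in place.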

    Special thanks to the creators of these LoRAs:

    Every generated image includes an embedded ComfyUI workflow, allowing you to open and test Artius WAN yourself. I’m confident you’ll love this model!

    Recommended sampler settings:

    • Steps: 5

    • Sampler: Heun

    • Scheduler: beta57 (via RES4LYF node)

    • Model Shift: 1 to 2 (lower values look more realistic)
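    Expressed as plain data, the recommendations above look like this (a sketch; the key names are illustrative, not exact ComfyUI widget names):

```python
# Author-recommended settings for Artius WAN (key names are illustrative).
RECOMMENDED = {
    "steps": 5,
    "sampler": "heun",      # selected in RES4LYF's ClownsharkSampler
    "scheduler": "beta57",  # scheduler provided by the RES4LYF node pack
    "shift": 1.0,           # model shift; anywhere in 1-2, lower = more realistic
}

def within_recommendations(s: dict) -> bool:
    """Loose sanity check of a settings dict against the recommended ranges."""
    return (
        s.get("steps") == 5
        and s.get("scheduler") == "beta57"
        and 1.0 <= s.get("shift", 0.0) <= 2.0
    )
```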

    If you'd like to support my work, consider joining me on Boosty or Patreon:
    Boosty
    Patreon

    Comments (37)


    alvamos354 · Aug 9, 2025 · 6 reactions

    Here's the google translation:

    Let them throw feces at me, but I believe this is currently the best Wan 2.1 checkpoint merge. Even the same-face issue is offset by the ability to vary figure silhouettes, which is already significant progress. It seems that 8 steps of res_multistep work better than 5 steps of Heun in ComfyUI, or maybe I've overdone the settings :)

    sanchezvfx
    Author
    Aug 9, 2025 · 3 reactions

    Yes, the res sampler is better but slower; I found that Heun is a good balance between quality and speed.

    Nawen · Sep 26, 2025 · 1 reaction

    If they throw anything, they're being petty; what you're saying is true, this is surprisingly good...

    Seeker360 · Aug 9, 2025 · 5 reactions

    Just produced my first video with this and I'm impressed at what it can do without needing to manually load in a bunch of LoRAs!

    Unfortunately, I'm on a 12GB VRAM 5070, and it took quite a bit longer than I'm used to for the generation to happen (480x838 @ 121 steps) - nothing extreme, just clearly all the merged loras etc demand quite a bit!

    Doesn't help that ClownSharkSampler only seems to have Heun 2s and Heun 3s rather than standard good ol' single step Heun (sounds like a Chinese ballroom dancer...)

    I'd be very interested to see if it's possible for this to be quantised as a GGUF for us poor VRAM peons!

    Very promising though - nice work!

    Seeker360 · Aug 9, 2025

    Also, this comes up in the terminal when running a generation - I don't have any LoRAs loaded, just the UNET, Text Encoder and VAE... is it all good or does it suggest an issue with the model?

    unet missing: ['text_embedding.0.scale_input', 'text_embedding.2.scale_input', 'time_embedding.0.scale_input', 'time_embedding.2.scale_input', 'time_projection.1.scale_input', 'blocks.0.self_attn.q.scale_input', 'blocks.0.self_attn.k.scale_input', 'blocks.0.self_attn.v.scale_input', 'blocks.0.self_attn.o.scale_input', 'blocks.0.cross_attn.q.scale_input', 'blocks.0.cross_attn.k.scale_input', 'blocks.0.cross_attn.v.scale_input', 'blocks.0.cross_attn.o.scale_input', 'blocks.0.ffn.0.scale_input', 'blocks.0.ffn.2.scale_input', ... (the same ten .scale_input keys repeated for blocks.1 through blocks.39) ..., 'head.head.scale_input']
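    A quick way to sanity-check a log like the one above is to confirm that every missing key is a `.scale_input` entry (these look like FP8 quantization scale tensors; per the author's reply they don't affect the result). A minimal sketch, not ComfyUI code:

```python
def only_scale_keys(missing_keys: list[str]) -> bool:
    """True if every missing key is a quantization scale tensor."""
    return all(k.endswith(".scale_input") or k.endswith(".scale_weight")
               for k in missing_keys)

# A few entries taken from the log above:
sample = [
    "text_embedding.0.scale_input",
    "blocks.0.self_attn.q.scale_input",
    "head.head.scale_input",
]
```

    If `only_scale_keys` returns False, a real weight tensor is missing and the checkpoint may actually be broken.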

    Seeker360 · Aug 9, 2025 · 2 reactions

    Having just made my second video (using res2s/beta57 as advised in another comment, 6 steps, CFG 2), I have to say... I'm blown away by this!

    I've tried a fair few WAN 2.1 models and merges, and hundreds of LoRAs, and the result I just got is better and more prompt adherent than anything I've achieved with those (or WAN 2.2, dare I say it)...

    If I had a hat, I'd be doffing it right now. Excellent work!

    sanchezvfx
    Author
    Aug 9, 2025

    Seeker360, these "unet missing" terminal messages don't affect the final result, but I will fix this in the next version.

    Seeker360 · Aug 9, 2025 · 1 reaction

    The actual video quality is amazing - even at 480p, I'm getting generations that are better quality than my 720p WAN 2.2 runs.... Maybe I should have been using Heun all this time!

    Prompt adherence is a bit patchy. It does really well with simple prompts, but anything slightly more complex and it falls apart... Not sure if that's something to do with the balance of LoRA weights within the merge?

    As a first draft though, this is definitely very impressive - can't wait to see the next version!

    sanchezvfx
    Author
    Aug 9, 2025

    Thank you, friends, for the feedback. I enjoy the results myself, and I'm glad you liked it.

    Seeker360 · Aug 10, 2025 · 2 reactions

    sanchezvfx Seriously, your checkpoint is that good, I'm contemplating whether I even need all my other WAN 2.1 and 2.2 models and my huge folders of loras... It is literally better than WAN 2.2 ... You should be extremely proud of it!

    vmarko70788 · Aug 9, 2025 · 9 reactions

    Thank You! Good job and thanks for sharing it. Any possibility to have it in Image to video version?

    _RUST_ · Aug 9, 2025

    Yes. And the author has examples

    sanchezvfx
    Author
    Aug 9, 2025 · 2 reactions

    Image-to-video is not so important if you have a top-quality image, but I2V is next :)

    vmarko70788 · Aug 10, 2025

    sanchezvfx That's right, but sometimes I start from a real photo or try to make more videos of the same imaginary character. Anyway I'm eagerly waiting for I2V, thanks a lot!

    Seeker360 · Aug 11, 2025

    After being so impressed with the T2V, I too can't wait for an I2V if it works anywhere near as well!

    _RUST_ · Aug 9, 2025 · 7 reactions

    Great model. Will there be an i2v model?

    sanchezvfx
    Author
    Aug 9, 2025 · 8 reactions

    Image-to-video is not so important if you have a top-quality image, but I2V is next :)

    ravenerkr841 · Aug 10, 2025 · 10 reactions

    Wow! Checkpoint Merge of wan? really? you are the legend man!

    2legsRises357 · Aug 10, 2025 · 2 reactions

    Very good and pretty fast, great work. Quick question, please: if I want to add extra LoRAs, do I add WAN high or WAN low?

    sanchezvfx
    Author
    Aug 10, 2025 · 1 reaction

    This is a Wan 2.1-based model, so you need Wan 2.1 LoRAs.

    2legsRises357 · Aug 11, 2025 · 1 reaction

    thank you, works so well.

    Kaddac · Aug 11, 2025 · 6 reactions

    Amazing and mind-blowing. Thanks for making it, thanks for sharing it.
    Would it be possible to upload a diffusers folder to huggingface or another platform, like this one for the base model? https://huggingface.co/Wan-AI/Wan2.1-T2V-14B-Diffusers/tree/main/transformer

    One of the only drawbacks I see is that my LoRAs trained on the base model seem to work less well with yours (characters, clothing). I would like to retrain them directly on your model.

    fluxxes · Feb 18, 2026

    Does that method work with Wan? I did exactly what you're trying to do with fine-tuned Flux models back in the day, and the output was bad. I came to the conclusion that it's best to train a LoRA on the official base model.

    Griphen116 · Aug 13, 2025

    Actually a great model.
    Any chance of getting an I2V model?

    sanchezvfx
    Author
    Aug 13, 2025 · 8 reactions

    yes

    Griphen116 · Aug 29, 2025 · 3 reactions

    @sanchezvfx any progress towards I2V with this?

    Still getting better T2V results with this than other models.

    STRWHERE · Nov 25, 2025

    +1 for i2v

    EnragedAntelope · Aug 14, 2025 · 4 reactions

    Hi, thanks for making this. Do you happen to be planning a WAN2.2 update? I would love all the kijair/lightxv/moviigen loras wrapped in to one

    BetterPorn · Aug 21, 2025 · 5 reactions

    Great work on this, I'm glad you liked the NSFW API model. I'm assuming you used the NSFW Wan 2.1 14b checkpoint as part of the base for this?

    sanchezvfx
    Author
    Aug 21, 2025 · 2 reactions

    Thanks! Not the model, but your wonderful LoRA is part of my mix.

    Seeker360 · Sep 4, 2025 · 4 reactions

    I thought I'd see how your checkpoint does at text-to-image by generating a single frame, and I was blown away, especially for NSFW content. It literally blows everything else out of the water. I could quite happily uninstall pretty much every other SDXL, Flux, and Qwen model... It's outstanding. I'm excited to see where this checkpoint goes next, as well as what we can expect from an I2V version.

    sanchezvfx
    Author
    Sep 5, 2025 · 4 reactions

    Hi! V2 will be better!

    Seeker360 · Sep 6, 2025

    @sanchezvfx Can't wait my friend! 😁

    _RUST_ · Sep 25, 2025

    Hi. I have a LoRA of a person that I made on Wan 2.2, in high- and low-noise versions. Can I use it with your model?

    sanchezvfx
    Author
    Sep 25, 2025

    You can try the low-noise one.

    tomasf878854 · Nov 3, 2025

    Hello, is there a way to make long videos from continuous clips with this model?

    rubensfred · Dec 23, 2025 · 2 reactions

    Incredible work on this model! I've been using it for 3 days and I'm impressed!

    Looking forward to an I2V like this, haha!

    Thanks for sharing it with us, Sanchez!

    Checkpoint
    Wan Video 14B t2v

    Details

    Downloads
    3,564
    Platform
    CivitAI
    Platform Status
    Available
    Created
    8/8/2025
    Updated
    5/12/2026
    Deleted
    -

    Files

    Available On (2 platforms)

    Same model published on other platforms. May have additional downloads or version variants.