CivArchive

    Generates WAN 2.1 videos in a fraction of the time.

    720p and 480p versions

    • Sampler/Scheduler: Euler/Simple

    • Steps: 4

    • CFG: 1

    • Sigma Shift: 5

    Original model from Lightx2v, converted to FP8 quantisation.
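For scripted setups, the recommended settings above can be collected in one place. This is an illustrative sketch only — the dict keys and the `validate_settings` helper are invented names, not a real pipeline API:

```python
# Recommended Lightspeed generation settings, gathered as a plain dict.
# Key names are illustrative placeholders, not any actual backend's API.
LIGHTSPEED_SETTINGS = {
    "sampler": "euler",
    "scheduler": "simple",
    "steps": 4,
    "cfg_scale": 1.0,   # distilled model: keep CFG at 1 (negatives are disabled at CFG 1)
    "sigma_shift": 5.0,
}

def validate_settings(s: dict) -> list[str]:
    """Return warnings for settings that deviate from the recommendations above."""
    warnings = []
    if s.get("cfg_scale", 1.0) != 1.0:
        warnings.append("CFG != 1 will likely degrade output on this distilled model")
    if s.get("steps", 4) > 8:
        warnings.append("4 steps is the intended operating point; more steps waste time")
    return warnings
```

Running `validate_settings(LIGHTSPEED_SETTINGS)` on the recommended values returns an empty list; a config with, say, CFG 3 would be flagged.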

    ☠️ Do not use any extra speed-up tricks or LoRAs or it may mess up your generations ... 🤬

    ⚠️ Hint: Most of the time the model takes you at your word. If you write "white", it is white. "Translucent" is translucent... like for the fluids. 💦 Now you know! 🫵 translucent whitish 🤫

    ⬇️⬇️⬇️⬇️⬇️⬇️⬇️⬇️⬇️⬇️⬇️

    8 GB VRAM, 32 GB RAM

    Sample times: <2 minutes for 81 frames, 4 steps on RTX 4070 Ti Super.

    Compatible with 14B LoRAs.

    I normally use 0-2 LoRAs, with strength at 0.4-1 depending on how strong the effect should be. 0.7-0.9 works best most of the time, without overwriting the style of an image.
    With multiple LoRAs it seems best to tune the strength down a bit, to 0.3-0.6.
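The strength guidance above can be captured as a tiny helper. The function name and the ranges-as-tuples encoding are illustrative, not part of any tool:

```python
def suggested_lora_strength(num_loras: int) -> tuple[float, float]:
    """Heuristic from the notes above: a single LoRA works well around
    0.7-0.9; with several stacked LoRAs, tune each down to roughly 0.3-0.6."""
    if num_loras <= 0:
        raise ValueError("need at least one LoRA")
    return (0.7, 0.9) if num_loras == 1 else (0.3, 0.6)
```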

    Basic workflow example:

    Here: https://civarchive.com/models/1811161?modelVersionId=2049602

    My favourite UI:

    SwarmUI https://github.com/mcmonkeyprojects/SwarmUI

    Testing (my specs):

    I can go wild on settings with this full checkpoint, even with added LoRAs:

    • 121 frames possible: ~ 3 minutes

    • 121 frames on 24 fps possible (more motion): ~ 3 minutes

    • 128 frames on 24 fps possible (more motion and extended): ~ 3.5 minutes

    Dependencies:


    YOU are responsible for outputs, as always! If you make ToS-violating content and I become aware of it, I WILL report it.


    Disclaimer

    This model is shared without warranties and on the condition that it is used in a lawful and responsible way. I do not support or take responsibility for illegal, harmful, or harassing uses. By downloading or using it, you accept that you are solely responsible for how it is used.


    Comments (95)

    haidensd58757Aug 7, 2025· 1 reaction
    CivitAI

    Can I use at least two LoRAs? They're not speed-up things; they're to fix movements and textures. Will it affect performance?

    darksidewalker
    Author
    Aug 7, 2025· 3 reactions

    You can use any LoRA, just not other speed LoRAs. I used up to 3 for poses/actions and so on...

    haidensd58757Aug 7, 2025· 2 reactions

    Alright that's good news brother. What about your Wan 2.2 Lightning LoRAs, since you said "Do not use any extra speed-up loras.." Will it conflict with this Checkpoint i2v Wan 2.1?

    decoystob2Aug 7, 2025· 1 reaction
    CivitAI

    This is amazing, it's so quick.
    May I ask, do you have any tips on camera movement? It seems to ignore a lot of the movement commands other wan2 checkpoints are ok with. Do you have any camera movement tips?

    darksidewalker
    Author
    Aug 8, 2025· 2 reactions

    Be more descriptive, I assume. All the speed-up LoRAs and tricks have a slight impact on prompt adherence.

    decoystob2Aug 8, 2025· 1 reaction

    darksidewalker Yep. That was the problem! Thanks :D

    7093904Aug 13, 2025

    Wan 2.1-based checkpoints will be very hit or miss for camera movement without help from custom nodes and more elaborate setups. Even with perfect descriptions you may just have to try a different seed and roll the dice. They didn't train it as thoroughly as the 2.2 14B, and even that is really more like 2.1.5.

    reed00112770Aug 8, 2025
    CivitAI

    Can't get this to work at all. I have 12 GB VRAM, but it always fails on model load using the provided workflow in ComfyUI. I've tried a lot of things. Help would be appreciated! Tried with the model in diffusion models using the right dtype. Which model is best for 12 GB VRAM?

    darksidewalker
    Author
    Aug 8, 2025· 1 reaction

    With ComfyUI I may not help much. Sorry. Maybe someone else can?

    reed00112770Aug 8, 2025· 2 reactions

    It's ok, I spent a lot of time messing around with ChatGPT and ComfyUI. Eventually I managed to get the small model to work. Thanks dude.

    datlurkaaAug 9, 2025· 2 reactions

    reed00112770 yo, can you share that model? I'm in the same boat with a 12 GB 4070

    InvictusAIAug 14, 2025

    reed00112770 ComfyUI just straight up crashes for me as well when loading this model. Mind sharing how you fixed it?

    a2354654513201Aug 12, 2025
    CivitAI

    Do you have first-last-frame-to-video? I used the first-last-frame-to-video node and the video's last frame turns yellow.

    Saratoga77Aug 14, 2025
    CivitAI

    Unfortunately this one doesn't work for me. Probably because I'm the rare guy on a Mac M1 using Draw Things. I set it to exactly every setting you listed, but it crashes during the initial "processing" stage. I WISH I could speed up my clips.

    Saratoga77Aug 14, 2025· 1 reaction

    i figured it out. "Draw Things" just did NOT like the LCM sampler. I switched to DDIM Trailing and now i get way better quality at quadruple the speed. Saves so much time. This is awesome. Thank you

    ggw19880307Aug 14, 2025
    CivitAI

    Will this have motion loss? I mean, does this model have a smaller range of motion than the original model?

    WildCentaurAug 14, 2025
    CivitAI

    Works like a charm. Hope you make a Lightspeed for Wan 2.2 too.

    Saratoga77Aug 15, 2025· 2 reactions
    CivitAI

    This thing is like the invention of the Microwave! ⭐️⭐️⭐️⭐️⭐️

    LoeeeAug 15, 2025
    CivitAI

    This is really strange. A few days ago, I was able to run it successfully and it worked quite well. But now, just a few days later, I can't even load the model—once I try, it causes my ComfyUI command line to freeze or pause!

    LoeeeAug 15, 2025

    I tried reinstalling ComfyUI and all the dependencies… but it still doesn’t work. I’m not sure if ComfyUI got updated or something… this is such a mess.

    darksidewalker
    Author
    Aug 15, 2025

    Loeee I may not help with ComfyUI, I use SwarmUI, but SwarmUI uses ComfyUI as backend and all is working fine. The only thing is SwarmUI automates so much, like choosing the right nodes, downloading missing models (VAE/CLIP...), and checking ComfyUI compatibility. My ComfyUI backend is on 0.3.50 and it works fine.

    LoeeeAug 15, 2025

    darksidewalker I increased the virtual memory allocated by Windows, and then it started working again. My RTX 4070 with 32GB RAM kept running smoothly! I didn’t even know why I did it—but somehow, it worked!

    darksidewalker
    Author
    Aug 15, 2025

    Loeee Some other Windows user stated that he needs a significant amount of pagefile; maybe there is a bug inside Windows that consumes huge amounts of RAM/pagefile. That's not a thing under my OS, so I may not help here.

    InvictusAIAug 15, 2025
    CivitAI

    Finally got it to run on ComfyUI, and while this is probably the best Wan2.1 model out there, quality wise, it's far from optimized. Also while the list might imply that you shouldn't do any "speed-up" tricks, you absolutely have to install at least Triton, it'll double your speed. (Tried it on a "clean" ComfyUI and got 40sec/it, with a fully decked out install - triton, sage, etc. - I got around 22sec/it. Same Workflow)

    The main crux of this model, and why it's a hard sell, is the ridiculous need for up to 80GB RAM when loading. This makes the time to first inference around 20+ minutes and leagues behind loading both Wan 2.2 high and low noise models, while still being at roughly Wan 2.1 quality.

    So for those who get a crash when loading this model in ComfyUI: Make sure that your system RAM + Pagefile have at least ~90GB total. (So 32GB RAM + 58GB pagefile, 48GB RAM + 48GB Pagefile etc.)
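The sizing rule in this comment can be sketched as a quick pre-flight check. Note that the ~90 GB figure is this commenter's anecdotal observation for this particular model, not an official requirement:

```python
def meets_load_budget(ram_gb: float, pagefile_gb: float, budget_gb: float = 90.0) -> bool:
    """Rule of thumb from the comment above: RAM plus pagefile should total
    roughly 90 GB to survive the model-load peak. Anecdotal, not official."""
    return ram_gb + pagefile_gb >= budget_gb
```

For example, 32 GB RAM + 58 GB pagefile or 48 GB RAM + 48 GB pagefile would pass, while 32 GB RAM with a 16 GB pagefile would not.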

    darksidewalker
    Author
    Aug 15, 2025

    Just for comparison, on my setup (SwarmUI/ComfyUI backend 0.3.50) @720p: consuming 24 GB RAM, 4 GB swap/pagefile, 13 GB VRAM. That's not even close to 90 GB like yours. That's heavy!

    InvictusAIAug 16, 2025

    darksidewalker Yea once fully loaded it'll drop down to occupying 40GB ram, during loading it'll peak above 80 and if the page file is smaller than that Comfy will just crash. (Also updated my backend to 0.3.50 and tried multiple loader nodes, all with the same results) Doesn't happen with any other model (Tried Wan2.1 fp16, fp8, gguf Q5 and Wan2.2 fp8 scaled and gguf Q5, has no trouble model swapping both Wan 2.2 models) Tried with both Torch 2.8+CU128 and CU129. I presume the two other people commenting that it crashes also run into a similar page file issue. This model seems very sensitive to some particular software/hardware configurations.

    Good job in general, love the output quality, but to the edge cases like myself Wan2.2 is 66% faster.

    darksidewalker
    Author
    Aug 16, 2025

    InvictusAI That's weird. And Wan 2.2 runs 2-3x slower for me than this.

    vandragonAug 16, 2025· 1 reaction
    CivitAI

    It doesn't seem to work. When it gets to the diffusion model it stops and says "Reconnecting" at the top right, then when I press run again it says TypeError: Failed to fetch.

    I'm using an RTX 3050 with 8 GB of VRAM.

    DrewPWayngAug 17, 2025· 5 reactions
    CivitAI

    Worked immediately for me, and super fast too. I noticed a lot of comments saying it isn't working, but if you're having issues then either you have your files in the wrong folders or your hardware can't handle it.

    Furfur34Aug 19, 2025· 1 reaction
    CivitAI

    I can't keep the character's mouth closed. Every time, they always start talking.

    felixxauk110Oct 6, 2025· 2 reactions

    that's what marriage feels like i suppose

    xemmemAug 21, 2025· 6 reactions
    CivitAI

    3060 12 GB. Now I can experience what it feels like to use Wan 2.1. I always got useless results with TeaCache and GGUF, and I had accepted this as my life until now.

    This thing is three times as fast and gave me results that I've never seen. Thank you.

    catwalk_supermodelAug 23, 2025
    CivitAI

    160 frame vid, 5060 8GB RAM, renders from a half hour down to a few minutes. LCM magic done right. With some nsfw renders, it behaves like the model was trained with Japanese filter requirements. Legit renders, but weird fun wacky censoring over female crotches materialize. Dunno about that part otherwise yea, minutes on eight gigglebytes? Wow.

    darksidewalker
    Author
    Aug 23, 2025

    For nsfw content you need trained LoRAs for this. WAN is not trained for that like most basic models from companies.

    catwalk_supermodelAug 24, 2025

    @darksidewalker Yea, that's no surprise. I'm noting the anomaly because I am using LoRAs, prompt magic, tweaking cfg/strength/steps/etc, yet the force is strong in this one. Overall I'm seeing a number of very strange renders reminiscent of gen1 SD1.0+. Tweaking cfg in increments of 0.1 from 0.5-2.0 does alter the composition (Lightspeed is a really sensitive model); I've gone up to 10 so far and it grinds its paintbrush in expected ways. Yet still, the artifacts are weird. I've had to negative 'blood' and 'gore' due to said weirdness, as an example.

    darksidewalker
    Author
    Aug 24, 2025

    @catwalk_supermodel Lightning/distilled models are meant for CFG 1; others do not work as expected. You could change the sampler+scheduler, but stay at CFG 1. Also, negatives do not work if CFG is 1; they are disabled this way.

    catwalk_supermodelAug 30, 2025

    @darksidewalker I've done enough testing now to confirm the problem was two-fold. 1. Must use weight fp8_e4m3fn_fast, 2. keep the lora strengths ~0.1 +/- ~0.1. Lightspeed is amazing.

    tbsmsksAug 29, 2025
    CivitAI

    It works better than any quant and is indeed as fast

    capchilla188Aug 30, 2025
    CivitAI

    phenomenal... blazing speed, high quality, and none of the range of motion issues from typical lightning accelerators. Thank you!

    Saratoga77Sep 1, 2025· 4 reactions
    CivitAI

    I love this. Will there be a WAN 2.2 Model??

    darksidewalker
    Author
    Sep 1, 2025· 2 reactions

    Maybe, but for now there are not quite the merges/loras to do one. Wan 2.2 is really new.

    Saratoga77Sep 1, 2025

    @darksidewalker Yeah, I haven't even tried 2.2 yet. This Lightspeed has totally changed the game for me. I don't know how you did it, but it's wonderful.

    darksidewalker
    Author
    Sep 1, 2025· 1 reaction

    @Saratoga77 Thx, but it is not my work, I used lightx2v model and made a merged quant for easy usage. But glad it is useful for you!

    EbonFantasySep 4, 2025· 5 reactions
    CivitAI

    NO WAY.... ITS SO FAST... UNREAL!!! and the quality is insane!!!

    vopiri8504329Sep 14, 2025
    CivitAI

    At cfg=1 the prompt adherence is really weak. Anyway, it's a good model because it's so fast, so I'll stick with it for the moment. Thank you!

    alternative_UniverseSep 16, 2025
    CivitAI

    I don't know what type of magic you did, but this is awesome. Thank you!

    darksidewalker
    Author
    Sep 16, 2025

    Thanks, but the most awesome stuff was done by lightx2v; I just made a quant/merge for easy usage :)

    @darksidewalker Still, nobody else did it, so I really appreciate it. Sadly, some LoRAs from Wan 2.2 seem to not work very well, but overall an awesome job. Speaking of which, do you think you could do the same thing to 2.2 for a Lightspeed version? That would be a game changer.

    darksidewalker
    Author
    Sep 16, 2025

    @alternative_Universe I really would like to do it, but atm it is tricky because of the split checkpoint that Wan 2.2 is. Plus the distilled versions are far from good atm, usable at best.

    @darksidewalker damn, seems like I will stick around 2.1 for a while, thanks tho

    MrfenderaiSep 27, 2025
    CivitAI

    Does it work with 8 GB? Can someone help me?

    EbonFantasySep 27, 2025

    Yes it does. I'm running it on 8 GB and making videos in less than 3 minutes.

    MrfenderaiOct 1, 2025

    @EbonFantasy How do I do it, friend? What's the workflow?

    EbonFantasyOct 1, 2025· 1 reaction

    @Mrfenderai https://civitai.com/images/99275606 you can download this video and drag it into your ComfyUI, if you are using ComfyUI.

    9680263Oct 6, 2025
    CivitAI

    is there a wan 2.2 version of this checkpoint?

    darksidewalker
    Author
    Oct 6, 2025· 1 reaction

    Not exactly the same atm, but my first attempts to create something similar: https://civitai.com/models/1981116

    9680263Oct 6, 2025

    @darksidewalker Cool thanks for the rapid response.

    DinglDustrDec 5, 2025

    Sorry to piggyback off of your comment, but can someone briefly explain to me the difference between the 'WAN 2.2 14B' models and the 'Wan Video 14B i2v 720p' models? Feel free to link a resource if you have something, I can't find info on it. Thanks!

    darksidewalker
    Author
    Dec 5, 2025

    This is the successor of the WAN 2.1 and WAN 2.2 FP8 models, the WAN 2.2 GGUF (FP16 base):
    https://civitai.com/models/2190659
    WAN 2.2 is always optimised up to 720p.
    The old WAN 2.1 models had two versions: one optimised for 480p and one for 720p.

    DinglDustrDec 5, 2025

    @darksidewalker Okay, thank you. I didn't mean to post two comments earlier. This one wasn't showing up after I posted it, so I posted a new one. I'm assuming I should keep using the WAN 2.2 14B Models, then.

    darksidewalker
    Author
    Dec 5, 2025

    @dingledust definitely if you can. WAN 2.2 is absolutely superior to WAN 2.1 🤝

    pogid97357345Oct 13, 2025· 3 reactions
    CivitAI

    what is the difference between 720p and 480p models?

    I was using this 480p model and was having a blast with it. Today I wanted to try out the 720p one, but everything, from the model size to the output quality, seems pretty much the same.

    darksidewalker
    Author
    Oct 13, 2025· 2 reactions

    One is optimised for 480p, the other for 720p; the 720p one can do 480p too. If you use a low resolution, both will do the same.

    pogid97357345Oct 13, 2025· 1 reaction

    @darksidewalker Thanks for the quick reply, and a great merge as well. I usually do square inputs at 640x640 pixels; which one should I use? The generation speed is identical, but maybe I couldn't see the small details.

    darksidewalker
    Author
    Oct 13, 2025· 2 reactions

    @pogid97357345 640x640 is low res; the 480p is great. If you go like 960x960, the 720p would be better.
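The author's rule of thumb above can be sketched as a tiny helper. The 960-pixel long-side cutoff is an assumed threshold derived from the two examples given (640x640 vs 960x960), not something the author specified:

```python
def pick_variant(width: int, height: int) -> str:
    """Pick the model variant by longest side, per the advice above:
    low resolutions suit the 480p model, higher ones the 720p model.
    The 960 cutoff is an assumption inferred from the examples."""
    return "720p" if max(width, height) >= 960 else "480p"
```

So 640x640 and the standard 480x832 would map to the 480p model, while 960x960 would map to the 720p one.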

    pogid97357345Oct 13, 2025· 1 reaction

    @darksidewalker awesome, man, thank you very much!

    superfly5000Nov 1, 2025· 1 reaction
    CivitAI

    Does this work with Wan2GP? I have it in the ckpts folder, but it is not being recognized. Have restarted, etc. No dice.

    darksidewalker
    Author
    Nov 1, 2025

    Don't know. I don't use this backend.

    karlson1337594Feb 25, 2026

    Same problem here.

    hakaanNov 3, 2025
    CivitAI

    I have tested 3 steps; it gives exactly the same result as 4 steps.

    tobycortesJan 19, 2026

    Remember that the more steps, the better it will process what's prompted, including LoRAs and textual inversions. So if you stack LoRAs you need more steps for it to process them; for me, I get really high quality at 10-20 steps!

    proxybase42931Nov 15, 2025
    CivitAI

    Wow. Mr RTX 4070 took 1200 seconds for a 5-second video at 480p; it literally improved 10-fold, as it takes 120 seconds now.

    The only noteworthy thing is that the AI's understanding of physics is more often prone to... Garry's Mod physics. Could be my setup, but I commend you nonetheless.

    darksidewalker
    Author
    Nov 15, 2025

    Hi! If you want the good physics, you have to go to WAN 2.2 :)

    DinglDustrDec 5, 2025
    CivitAI

    Can someone briefly explain the difference between the 'Wan Video 14B i2v 720p' models and the 'WAN 2.2 14B' models? (Not dasiwa specifically). Feel free to point me to a resource for me to learn about them, I'm unable to find anything on it. Thanks!

    darksidewalker
    Author
    Dec 5, 2025· 1 reaction

    You're confusing WAN 2.1 with 2.2, but if you want to know more about WAN 2.2, here is a guide: https://civitai.com/articles/20293/darksidewalkers-wan-22-14b-i2v-usage-guide-definitive-edition

    DinglDustrDec 5, 2025

    @darksidewalker So the models labeled 'Wan Video 14B i2v 720p' on CivitAI are WAN 2.1 models then? I think that's what was confusing me, that it doesn't have 2.1 in the label. So I'll keep using the 2.2 models then. Thank you.

    DinglDustrDec 5, 2025

    @darksidewalker And thank you for this guide. It looks like this is what I need.

    darksidewalker
    Author
    Dec 5, 2025

    @dingledust The headline has the WAN 2.1 label; the model names can not be any longer. Maybe I should change them.

    RathmaDec 24, 2025
    CivitAI

    Thanks, I love it.

    I'm using 480p (480/832) but I'd like to get a little better quality out of it.

    So I'm setting it to 576/1024 (like in FusionX), but it doesn't significantly improve the quality.

    I have a question.

    Is it better to use 480p and set a higher resolution than the standard, or 720p and set a lower resolution than the standard (720/1280)?

    I have 8GB of VRAM, so 576/1024 is my sweet spot.

    darksidewalker
    Author
    Dec 24, 2025· 1 reaction

    If you want better quality, you may use WAN 2.2 instead of WAN 2.1.
    But if you want to use WAN 2.1, then 720p and set a lower resolution.

    RathmaDec 24, 2025

    @darksidewalker Wan 2.2 with 8gb vram, is it possible?

    darksidewalker
    Author
    Dec 24, 2025· 1 reaction

    @Rathma With a lower quant or enough RAM.

    panitobackJan 8, 2026
    CivitAI

    Hi guys, I have a problem using WAN 2.1 14B. This is my workflow: https://civitai.com/models/1802623?modelVersionId=2086227 . My problem is that it's not generating any video; the final output is just a black square. I already tried debugging with Gemini and ChatGPT, but it was completely useless. Could someone help me?

    g1263495582Jan 11, 2026

    Update comfy.

    fadedninnaJan 21, 2026
    CivitAI

    In the CMD screen I saw this: "WARNING: No VAE weights detected, VAE not initalized." Is this a problem? I'm using your workflow.

    darksidewalker
    Author
    Jan 21, 2026

    Well, you need to use a VAE.

    fadedninnaJan 21, 2026

    Yes, I am using the VAE. I think my GPU is weak (4060 8GB).

    "D:\Stable Diffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-videohelpersuite\videohelpersuite\nodes.py:130: RuntimeWarning: invalid value encountered in cast

    return tensor_to_int(tensor, 8).astype(np.uint8)"

    fadedninnaJan 21, 2026

    @darksidewalker It looks like I can use the 480p version of the model. After changing it to 480p it works as expected :)

    fadedninnaJan 21, 2026

    I was also using sage attention and probably removing it helped too.

    darksidewalker
    Author
    Jan 22, 2026· 2 reactions

    I would recommend you use WAN 2.2 and maybe a Q6 or Q5 quant GGUF; that's better than WAN 2.1.
    WAN 2.2 can also use lower resolutions.

    72AlcoholFeb 11, 2026
    CivitAI

    What does "Sage Attention, Flash Attention, Radial Attention, Q8-kernel, SGL-kernel, and VLLM are built in, so there's no need to enable them" mean?

    EngelsXIIIFeb 24, 2026· 1 reaction
    CivitAI

    Where can I find "dasiwa-wan2.1-i2v-lightspeed.gguf" ?

    Also this works better than wan2.2 for what I was trying to do ^_^

    darksidewalker
    Author
    Feb 24, 2026· 1 reaction

    Not available as far as I know and at least not from me.

    EngelsXIIIFeb 24, 2026

    @darksidewalker Ahh, it said to put it in the diffusion models in this workflow lol, went searching for it. Keep up the great work <3

    darksidewalker
    Author
    Feb 24, 2026

    @EngelsXIII Thank you! But there will unlikely be any more updates to WAN 2.1, since the tech is not developed anymore.