    💀💋 DaSiWa-WAN 2.2 I2V 14B TastySin v8 | Lightspeed | GGUF 💋💀 - Q8 Low

    💀💋 DaSiWa-WAN 2.2 I2V 14B TastySin | Lightspeed | GGUF 💋💀

    This is a WAN 2.2 model: you will need one pair of High + Low files.

    Version overview: https://civarchive.com/articles/23495/dasiwa-model-versions-and-timeline


    🔮 Key Features:

    • 🔥 LoRA-Free Generations
      Generate high-quality videos without stacking WAN 2.2 LoRAs (unless you want to add special styles/concepts).

    • ☄️ Fast: 4-step generation

    • 💫 Quality motion (fewer slowdowns, no pixelated hyper-motion)

    • 🔞 NSFW and SFW + extremely versatile (more built-in concepts):

      • Enhanced anatomy + poses + framing

      • Better understanding of sexual concepts

    • 🪄 Better prompt responsiveness

    • 👘 Better understanding of anime/manga-style composition

    • 🪡 Q8 (FP16 base) precision

    • 🚫 Do not use any extra speed-up (low-step) LoRAs; this is already baked in

    ✅ Optimisations

    • 🌟 CFGZeroStar patch (better results and prompt adherence)

    • 🍰 Latest distillation baked in (r64-1022)

    • 🍀 Additional concept optimisation (compared to MidnightFlirt)

    • 🎬 Reward attention (more realistic movements)

    • ✨ No or extremely low transformation of details in anime/realistic images (lips, eyes, ears, breasts, genitals, ...)

    • 🌠 Refined motion (compared to MidnightFlirt)

    • 💌 Better at guessing details out of frame (compared to MidnightFlirt)

    • 💫 As close to real-life motion speeds as I could get with speed-up tech

    • 🍌 Even fewer attempts needed for good results (compared to MidnightFlirt)

    • 🧩 Even better compatibility with LoRAs (compared to MidnightFlirt)

    • 🖼️ Usable with your preferred/custom CLIP (if compatible)

    • 😵‍💫 Further reduced hallucinations (compared to MidnightFlirt)

    • 💠 Capable of zero-prompt results

    • 🚫 CLIP not included


    🍒Workflow

    Make sure to check out my easy-to-use workflows!


    🍄LoRA's

    Try it first without additional LoRAs!

    But: this checkpoint is not meant to replace all LoRAs. It is meant to:

    • Perform better overall on its own

    • Be as easy as possible to use

    • Be absolutely awesome when combined with LoRAs


    🪧Announcement

    ⚠️ Read the corresponding announcement.

    📢 Make sure to check it out for in-depth information and a complex comparison!


    🆕 New to WAN 2.2 I2V? - Check out my guide.


    • Steps: 4

    • CFG: 1

    • Sampler/Scheduler: Euler/Simple, UniPC_BH2/Simple

    • Resolution up to 720p (native quality).

    • Add other LoRAs at strength 0.3-0.6 at first

    • 16 or 24 fps, 81 or 97 frames ~ 5s
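    As a quick sanity check on the fps/frame-count pairing above, clip length can be sketched in Python. This assumes the common WAN-style convention of counting the intervals between frames; the exact convention may differ per workflow:

    ```python
    def clip_seconds(frames: int, fps: int) -> float:
        # Assumes (frames - 1) frame intervals, a common convention in
        # WAN-style video pipelines; some tools simply use frames / fps.
        return (frames - 1) / fps

    print(clip_seconds(81, 16))  # 5.0
    print(clip_seconds(97, 24))  # 4.0
    ```

    Under this convention, 81 frames at 16 fps is exactly 5 s; 97 frames at 24 fps is 4 s, so the "~ 5s" above is approximate.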


    Dependencies


    💫 Speed + Examples

    Q8 checkpoint - On 16GB VRAM, 64GB RAM, 4 steps, cfg 1, 81 frames

    • 368p: 120 sec

    • 480p: 160 sec

    • 576p: 220 sec

    • 608p: 340 sec

    • 672p: 680 sec

    • 720p: 730 sec

    • Most examples are without any additional LoRAs

    • Examples with LoRAs are for testing compatibility

    • The initial anime-style example images I made also use my model 🗡️💀 DaSiWa-Illustrious-XL 💀🗡️

    • Other models were used for the realistic examples
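    For a rough feel of how the timings above scale, here is a small sketch. It assumes the listed "p" values are heights and uses height squared as a stand-in for pixel count, since widths are not given; the VRAM-spillover interpretation is my guess, not confirmed by the author:

    ```python
    # Timing table from above: Q8 GGUF, 16 GB VRAM, 4 steps, CFG 1, 81 frames.
    timings = {368: 120, 480: 160, 576: 220, 608: 340, 672: 680, 720: 730}

    # If time scaled linearly with pixel count, 720p would take roughly
    # 120 s * (720/368)^2 ≈ 459 s. The measured 730 s is well above that,
    # which would be consistent with the model no longer fitting fully in
    # VRAM at higher resolutions (partial offloading to system RAM).
    linear_estimate = timings[368] * (720 / 368) ** 2
    print(round(linear_estimate), timings[720])
    ```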


    🩻 Known issues

    • 🫦 The most delicious sin!

    • Tell me 🫵🫢


    Approximate expected quality from quantization

    These are my tests of my DaSiWa checkpoints against a full FP16 safetensors checkpoint, taking prompt adherence and visual satisfaction into account.

    Quantization - Quality estimation

    • Q8 ⭐⭐⭐⭐⭐ ~ like FP16/FP8+, excellent results

    • Q6 ⭐⭐⭐⭐ ~ like FP8, very good results

    • Q5 ⭐⭐⭐ ~ good to very good results

    • Q4 ⭐⭐ ~ medium-good results

    • Q2 ⭐ ~ poor results, only use if you have to

    ⚠️ Do not compare these with the unofficial quants of my checkpoint made by others; they are based on FP8, not FP16 like my quants.
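    As an unofficial back-of-envelope, per-file size at each quant level can be estimated from the approximate bits-per-weight figures commonly quoted for llama.cpp-style GGUF quants. These numbers are generic assumptions, not measured from this checkpoint:

    ```python
    PARAMS = 14e9  # WAN 2.2 I2V 14B

    # Approximate bits-per-weight for common GGUF quant types; generic
    # llama.cpp-style ballpark figures, not exact for this model.
    BPW = {"Q8_0": 8.5, "Q6_K": 6.6, "Q5_K": 5.5, "Q4_K": 4.8, "Q2_K": 2.6}

    for name, bpw in BPW.items():
        gb = PARAMS * bpw / 8 / 1e9
        print(f"{name}: ~{gb:.1f} GB per file (High or Low)")
    ```

    For Q8 this lands around 15 GB per file, which is why a 16 GB card runs it with partial offloading.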


    🩺 Fixes & Feedback

    • If you use LoRAs, try to respect the LoRA training triggers and use some versatile descriptions; most LoRAs will work at 0.3-0.6 (start with 0.3)

    • Raise LoRA strength in small steps (+0.1)

    • Do not mass-add LoRAs; just add 1 or 2 (×2 for High+Low)

    • Negative prompting does not work with CFG 1; that's a limitation of speed-ups at CFG 1 (unless you use NAG)

    • Low resolutions (e.g. 384x576) are for fast samples and will blur fine details; use a higher resolution if you want clear details

    🪧❗ Test your ComfyUI backend with this absolutely basic test workflow before asking about errors.
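    The LoRA strength advice above (start at 0.3, raise in +0.1 steps, stay within 0.3-0.6 at first) can be expressed as a tiny helper; `lora_ramp` is just an illustrative name, not part of any workflow:

    ```python
    def lora_ramp(start=0.3, stop=0.6, step=0.1):
        # Enumerate the strengths to try, lowest first; test each one
        # before moving to the next.
        strengths, s = [], start
        while s <= stop + 1e-9:  # tolerate float drift
            strengths.append(round(s, 2))
            s += step
        return strengths

    print(lora_ramp())  # [0.3, 0.4, 0.5, 0.6]
    ```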


    🖤 Why I Made This

    I was tired of using massive lists of LoRAs just to get a remotely good result after 10 generations, consuming hours of time.
    Now I can make my videos with 1 or 2 concept LoRAs instead of pushing 6 to 10 LoRAs (Low/High) into a generation.


    This checkpoint is also my personal playground.


    Closing words

    🤩 I want to thank all the fantastic other creators who made super nice LoRAs and concepts to play with! Support these awesome creators by using their LoRAs, posting to their galleries, and sharing the metadata!
    ⚠️ I made all this with permissions or open-source resources (at the time they were incorporated).

    I share as much insight as I can without compromising my work. I'm doing this for fun as a hobby and just don't want my hobby to be destroyed.

    More details can be obtained in the corresponding announcements!


    If you would like to contribute to my awesome (😉) checkpoint or are willing to share resources, I'll gladly give credit! Just contact me!

    ✅ All credits / resources are mentioned inside the announcements! - Since different versions may have different resources.


    YOU are responsible for outputs, as always! If you make ToS-violating content and I become aware of it, I WILL report it.


    Disclaimer

    These models are shared without warranties and on the condition that they are used in a lawful and responsible way. I do not support or take responsibility for illegal, harmful, or harassing uses. By downloading or using them, you accept that you are solely responsible for how they are used.


    Custom License Addendum: Distribution Restriction

    Notice: Notwithstanding the base license selected for this model, the following restrictive terms apply:

    1. No Redistribution: You are not permitted to host, mirror, or redistribute this model (checkpoint, LoRA, or Safetensors files) on any other platform, website, or service (including but not limited to Hugging Face, Tensor.art, or SeaArt) without explicit written permission from the creator.

    2. Attribution & Source: This model is officially maintained only on Civitai or other platforms where I explicitly own the repository. To ensure users receive the correct version, updates, and safety metadata, please point users to the original URL.

    3. Usage: All other rights regarding the use of the model for image generation remain as per the terms and the restrictions provided per model.


    Comments (204)

    ThatSoKittenDec 8, 2025· 1 reaction
    CivitAI

    Anyone know how to run this on a 7900XTX with 24GB VRAM and 32GB RAM? I'm using the new PyTorch drivers. I'm seeing cards with 8-16GB and less RAM run these great; is this just an NVIDIA-to-AMD difference from CUDA, or am I doing something wrong? I can run the Q5 H/L once, then I need to close and reopen ComfyUI to run it again, or it OOMs/freezes/crashes or won't do anything and gets stuck on a step.

    darksidewalker
    Author
    Dec 8, 2025

    Your card should be able to run the Q8 without any issues, just from the specs. But I'm not familiar with ComfyUI on AMD and its dependencies, so I can't help with what's wrong. I'm fairly sure it's the installation and not the model, though; GGUF should run on AMD without issues, so I've heard.

    ThatSoKittenDec 9, 2025

    @darksidewalker I'm not sure what I'm doing wrong. If it helps, my VRAM on the Q5 H/L goes to 21.9GB/24GB, and the system RAM goes to around 19.2GB out of 32GB. Below is a small snippet from the first run; if anything looks wrong, let me know. I'm not an expert, I just follow guides and hit run...

    got prompt

    Using split attention in VAE

    Using split attention in VAE

    VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16

    Memory profile cache hit for cuda_30303030-3030-3033-3030-303030303030::768::528::1280::880::3::fp32::linear: optimal_batch_size=1, max_safe_chunk=2048

    Using scaled fp8: fp8 matrix mult: False, scale input: False

    Requested to load WanTEModel

    loaded completely; 95367431640625005117571072.00 MB usable, 6419.48 MB loaded, full load: True

    CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cuda:0, dtype: torch.float16

    Requested to load WanVAE

    loaded completely; 12453.42 MB usable, 242.03 MB loaded, full load: True

    gguf qtypes: F32 (2), F16 (693), Q5_K (280), Q6_K (120)

    model weight dtype torch.float16, manual cast: None

    model_type FLOW

    Requested to load WAN21

    0 models unloaded.

    loaded partially; 9693.67 MB usable, 9668.01 MB loaded, 756.68 MB offloaded, 17.20 MB buffer reserved, lowvram patches: 0

    0%| | 0/2 [00:00<?, ?it/s]F

    ThatSoKittenDec 9, 2025

    Okay, with Q5 H/L and a few attempts, I've created a video in 31 mins. After that I hit run again to see if it breaks, gets faster, or crashes something again; this time it took only 485 secs. I'm now going to download Q8 H/L and see if it works.

    ThatSoKittenDec 9, 2025· 2 reactions

    Working good now, It ran Q8H/L without issues. Thanks for model! <3

    R3G4LDec 12, 2025

    @ThatSoKitten What resolution and times are you getting on Q8? :)

    ThatSoKittenDec 13, 2025· 1 reaction

    @R3G4L takes me around 250 to 330secs, 81 frames, 528x768 - 5 sec video

    qekDec 13, 2025

    @ThatSoKitten Maybe you should try lighter VAEs https://huggingface.co/lightx2v/Autoencoders
    some go to the vae folder, some to vae_approx. ComfyUI supports them all

    ThatSoKittenDec 13, 2025

    @qek ill try it!

    darksidewalker
    Author
    Dec 14, 2025

    These VAEs from lightx2v are not good, or let's say very bad, with WAN 2.2.
    In my testing the quality degrades extremely, and there is no measurable speed-up.

    qekDec 14, 2025

    @ThatSoKitten try them* There are three encoders available

    ThatSoKittenDec 14, 2025

    @darksidewalker I see this, made it super bad

    qekDec 15, 2025

    @ThatSoKitten Looks ok to me, but I can use original VAEs with no problems anyway. Kijai said the light autoencoders did not save a lot of VRAM/RAM compared to the originals

    darksidewalker
    Author
    Dec 15, 2025· 1 reaction

    @qek I did some test samples, and the results were like the video got heavily compressed with block-artifacts.

    qekDec 15, 2025

    @darksidewalker Tried three light VAEs for 2.1, terrible quality, I am removing them

    matsudamacDec 12, 2025· 1 reaction
    CivitAI

    It has been very useful, thanks! It takes a little longer than expected. I have a 1600 Super (I know, I know, it's not good for this), but it came down from about 40 minutes to about 15, unless I add LoRAs and set CFG to 2.0. I have 32GB RAM, so that helps, I think. If you have any ideas about how to make it faster, please let me know. Also, I'm using the Q4 version, btw.

    darksidewalker
    Author
    Dec 12, 2025

    Thank you!

    My guide has some information about speed-ups, including for your GPU.

    https://civitai.com/articles/20293/darksidewalkers-wan-22-14b-i2v-usage-guide-definitive-edition

    But your GPU is just not suitable for AI generation; that's why you get massive times.

    matsudamacDec 12, 2025

    @darksidewalker Yeah... you're right. I'm thinking of swapping to a 3060 with 12GB VRAM. I don't know how much it will help, but I hope it goes from 30-40 minutes (with LoRAs and CFG, still less than before, haha) to like 10-15, and probably with a Q5 or Q6.

    UncleJertDec 16, 2025

    @matsudamac I'm using a 3060 12gb with 32gb of system ram. If the models are already loaded I can get a 5 second 720x720 in 6 minutes with the fp8 version.

    matsudamacDec 22, 2025

    @UncleJert Really?! Then I will buy one in probably a few weeks; I have to save some money for it. But do you use CFG too? Or LoRAs? And how many steps do you use? I usually use 4 or 6 steps. I followed @darksidewalker's advice and checked the guide; now I see that my GPU is not only unsuitable for AI, technically it shouldn't even work, haha. But still, with LoRAs and everything, it's sometimes like 40 minutes, and it could be more if I add more frames. But what do you mean about the models being already loaded?

    UncleJertDec 22, 2025

    @matsudamac 1.0 on CFG and 4 steps across two samplers. I never have an issue using loras. Only tend to use 2 loras at most though. When I say the model is already loaded, I mean that it's the second gen. Already warmed up is all. But I definitely recommend the 3060! 👍

    RenessanceDec 14, 2025· 1 reaction
    CivitAI

    Guys, can someone upload the encoder from the author's description to Google drive? I think Internet operators block downloads from rusurs :-(

    qekDec 15, 2025

    Is Cloudflare Storage banned in the country?

    HoneyphoriaDec 14, 2025· 3 reactions
    CivitAI

    Bro, first of all, great job! I bought early access; I hope my 6 bucks make your life more enjoyable, you deserve it for sure, hah 😁 I need your help: eyes in final generations are always slightly blurry. I'm generating at 720p with 6 steps, 1 high noise, 5 low. What can I do to improve fidelity? Thanks for the answer, bro!

    HoneyphoriaDec 14, 2025

    UPD: I think i should try to set up sigma shift on low noise to something around 8-9

    darksidewalker
    Author
    Dec 14, 2025· 1 reaction

    @Honeyphoria Hi! Thanks!
    I would try shift 8 first, and maybe not 1:5; try 2:4 or 3:3. At 720p this should be good to go for fine details.

    vicautDec 15, 2025· 1 reaction

    try 3:6, sigma high 4.5, sigma low 5

    JellaiDec 16, 2025· 1 reaction
    CivitAI

    This looks really cool. I really wanted to see how people were prompting it, but it's weird... for whatever reason, on DaSiWa specifically, the users of it don't like to show their prompts. I've never seen such a high percentage of hidden prompts in a user example set. I wonder why that is.

    darksidewalker
    Author
    Dec 16, 2025· 1 reaction

    Don't know; my examples are all with metadata and prompts. Some stuff I just don't want to share, but almost all of my work is posted with data. Maybe some of the early adopters don't want their prompts/work copied. But it is legitimate if a creator doesn't want to share their individual workflow.

    JellaiDec 16, 2025

    @darksidewalker Yeah, nothing against any individual that does it. I just found it interesting that this model built a little culture around it.

    zzozzDec 16, 2025· 2 reactions

    When uploading I2V results, civitai doesn't automatically fill in the prompts. You have to enter them manually, which is a bit annoying.

    hereandnow931Dec 17, 2025

    The only person that can give accurate prompts is the author since he is the only one that can create captions for the checkpoint. Only the creator of the lora or checkpoint knows the trigger words. The users are just guessing.

    JellaiDec 17, 2025

    @hereandnow931 If I understand you correctly, "accurate" is not the same as "official". I mean, if someone used a prompt the checkpoint creator didn't think of, and it made the checkpoint do something that is surprising or cool, the only accurate prompt would be the prompt they used. Checkpoint creators don't know every single token of their models, and a user's guess can still end up with useful results. It can be productive to guess on a checkpoint, though maybe not as much on a lora.

    JellaiDec 17, 2025

    @zzozz It is... I guess that makes the culture around other checkpoints and loras more interesting, that so many people do put it in manually. 

    darksidewalker
    Author
    Dec 17, 2025· 2 reactions

    Guys, this is not a LoRA; there are no trigger words. It is a full checkpoint with natural-language understanding. You can describe whatever you want; whether WAN 2.2 and my finetuned checkpoint have an adequate answer to the prompt is another matter. But I think you're mixing things up here.
    The people who made the initial samples just did not want to share their prompts; don't overthink this.
    As time goes on and more samples come in, like on my other checkpoints, there will be more posts with metadata.
    Also, where is the fun if one just copies a prompt from another without thinking of a unique way to use the model?
    Just calm down a bit, the model only went free today XD

    vicautDec 17, 2025
    CivitAI

    Is the Mystic XXX Lora integrated in this checkpoint?

    nanalayaDec 17, 2025
    CivitAI

    V7 checkpoint has been rocking so far. I hope this will as well.

    darksidewalker
    Author
    Dec 17, 2025

    Try, post something and tell us :)

    Elmer588Dec 17, 2025
    CivitAI

    Great checkpoint, thanks a lot! A question though: whatever the prompt, the character keeps "wobbling" as if dancing/shaking. Is that normal? How can I make them stand still, or follow the prompted action without this motion pollution?

    darksidewalker
    Author
    Dec 17, 2025· 1 reaction

    As you seem new to AI video generation and have no posts at all, I can not give you any advice on what you are doing wrong.
    But there are plenty of examples with prompts that do not have this problem. 👍

    Elmer588Dec 17, 2025

    @darksidewalker I did not post indeed, but I have some experience. I'm using your last workflow without any Loras.

    darksidewalker
    Author
    Dec 17, 2025

    @Elmer588 That is nice to hear, but from the tools you use I can not see what you are doing.
    So how can I give you advice out of the blue?
    I think it is maybe just a prompting issue.

    Elmer588Dec 17, 2025

    @darksidewalker Thanks a lot for your time, I will test different prompting and see if it persists !

    4734360Dec 17, 2025

    Works great, but I can also report this hip sway thing.

    Elmer588Dec 28, 2025

    @darksidewalker I understood where the wobbling came from. I was using "res_multistep/beta" from your workflow, but then I checked this page and saw your recommendations. It's much better with "uni_pc_bh2/simple". Maybe update the recommendations on your non-GGUF model page if it's the same setting?

    darksidewalker
    Author
    Dec 28, 2025

    @Elmer588 In my experience, euler/simple, uni_pc_bh2/simple, and res_multistep/beta all work well. It is just preference, I think; none of these wobbled for me. But maybe it is different for other setups.

    solidframegaming301Dec 17, 2025· 9 reactions
    CivitAI

    can we get the non gguf versions?

    qekDec 17, 2025

    I think he said no

    darksidewalker
    Author
    Dec 17, 2025· 1 reaction

    So funny 😄
    If I do a safetensors ... "We want gguf" ...
    If I do all these gguf's ... "We want safetensors" ...

    squiddy3Dec 17, 2025

    @darksidewalker I too would like safetensor. It would be much appreciated! :)

    RadomAccessForDec 17, 2025

    Safetensors can be compiled by torch; for me, that way the speed is actually faster.

    flickerleafDec 18, 2025

    I'm also interested in the Safetensors version; not all applications have the functionality to use gguf. I sincerely hope you can release the Safetensors version.

    SaxliveDec 17, 2025· 2 reactions
    CivitAI

    I tested the q8 gguf, and it works really well.
    The quality and performance are impressive.
    Thank you so much for creating and sharing this!

    Frgmt80Dec 17, 2025· 1 reaction
    CivitAI

    Holy shit, the generation is so fast!

    CuajonetaDec 18, 2025· 1 reaction
    CivitAI

    Amazing! I'm speechless with gratitude for your models and workflows. They run perfectly, fast, and without any OOM errors when using the Q8 with 16GB of VRAM. No problems with or without --use-sage-attention in ComfyUI. For someone who only dabbles in AI as a hobby or to kill time because I'm terrible at hand-drawn animation, you've left a wonderful gift to bring my drawings to life. The Q8 is actually faster than the Q6 for me! XD With all the other models and workflows, I have to waste hours fixing OOM errors or other bugs that always seem to pop up. Your work is impeccable and highly appreciated.

    darksidewalker
    Author
    Dec 18, 2025

    Thank you very much!

    Z_chanDec 18, 2025· 1 reaction
    CivitAI

    smoothness and quality compressed into the fastest checkpoint i've used yet. q8 works perfectly despite my system being rated at q6 on git. thank you for your contributions to the future of Art 💜

    UsedTissuesDec 18, 2025
    CivitAI

    I need some advice on how to make the most out of this. No matter how much I prompt or specify the intensity or speed of the desired movement I want in my clip, the output always looks incredibly rigid with anime style images.

    If I swap to the other checkpoint I use, which is this one: https://civitai.com/models/2053259/wan-22-enhanced-nsfw-or-camera-prompt-adherence-lightning-edition-i2v-and-t2v-fp8-gguf

    I get the desired speed/intensity of motion that I want.

    darksidewalker
    Author
    Dec 18, 2025

    The "speed enhancement" of that checkpoint comes at a cost (blurry faces, fewer details, morphing, lower prompt adherence, low compatibility, ...). I know why that is, but that's another story.

    My checkpoint prioritises quality over artificially fast motion for its own sake. So you will never get jiggle, wiggle, super-fast boom-stick motion with this one, unless I find a way to maintain my quality standards.

    That said, there are techniques to speed things up:
    - Higher FPS (24)
    - Motion LoRAs (these will sacrifice details)
    - Refined prompting

    But the question is: if you get what you want from the other checkpoint while everything with mine is so "incredibly rigid", why not just use the other one?

    UsedTissuesDec 18, 2025

    @darksidewalker I didn't mean to make it sound like I didn't like your checkpoint, because I do like it. One of my more recent video uploads used it. I was using Midnight Flirt before this one became public, and I was just wondering if I was doing something wrong because I'm still fairly new to I2V generation

    darksidewalker
    Author
    Dec 18, 2025

    @UsedTissues Okay, sorry then. I got not much sleep; maybe I exaggerated. 😩
    The thing is, I hear that complaint every time someone comes from the other checkpoints, and I have tried virtually all of them. Sure, some merge in crazy motions, but then every detail is trash.

    I can not give any tip other than the above; it is not made for that. I'm optimising whenever I can; maybe the next iteration will yield some surprise there, but it is a process and quality is my first goal. 😊

    UsedTissuesDec 18, 2025· 1 reaction

    @darksidewalker No worries man, I got like 2-3 hours last night myself, lmao. I do really like your DaSiWa checkpoints, they are very good quality, and on the page for the TastySin checkpoint, my favorite showcase vid is the one where the girl in the black dress does a quick half-spin and it turns to white, the way her hair moves in that clip is amazing!

    darksidewalker
    Author
    Dec 18, 2025

    @UsedTissues Thanks! Yeah that is so fluid :)

    lug_LDec 18, 2025
    CivitAI

    Hi, I love the model. I wanted to ask if you have one like this with 4 steps for ANIMATE. If so, could you share the link to which one it is? 😊

    darksidewalker
    Author
    Dec 18, 2025· 2 reactions

    Thanks!
    I have not done a WAN-Animate version yet.

    lug_LDec 18, 2025

    @darksidewalker  I’ll wait then, thanks for replying. 😊

    gambikules858Dec 19, 2025
    CivitAI

    Speed is the same as all other models; don't expect to generate 4s in 160s. 5 min for 720x720.

    FloatsYourStoatDec 19, 2025· 1 reaction

    You might be doing something wrong; it takes me an average of 180-200 seconds to generate 81 frames at 720x720 on a 5060 Ti 16GB, with --use-sage-attention and --fast fp16_accumulation.

    PotatoChipOwODec 20, 2025· 6 reactions
    CivitAI

    Q6 GGUF is awesome! Now in an 8GB VRAM + 32GB DRAM setup, I can generate animations while doing other things at the same time. 😄

    darksidewalker
    Author
    Dec 20, 2025· 3 reactions

    I know what you are doing! ~ I just can not prove it 😋

    xpnrtDec 20, 2025
    CivitAI

    Can we get a Q3_K_S as well? I do GGUF conversions of my WAN merges, and while Q2 really is the bottom of the barrel, Q3_K_S surprisingly holds up well.

    darksidewalker
    Author
    Dec 21, 2025

    I'll note that for the next release :)

    RavirKunDec 20, 2025· 3 reactions
    CivitAI

    This model is surprisingly amazing. My PC can usually only handle Q4, but I tried Q5 and then Q6 today — and it's still fast, no OOM at all. I even generated while multitasking: browsing Facebook, running other tabs, and watching anime, and everything ran smoothly.

    Tomorrow, I’ll try testing Q8. Thanks so much for this model!

    My PC is 3060 12GB and 16GB RAM

    qekDec 20, 2025

    You don't have to if Q6 works well, but feel free to test it

    RavirKunDec 21, 2025· 2 reactions

    Already tried the Q8 version today, and wow — I'm shocked I can still generate it without any issues. No OOM at all, even while multitasking. Only a 2-minute difference in generation time: Q6 takes about 10 minutes, Q8 takes 12 minutes at 720p resolution.

    This is impressive as hell. Didn't expect my setup to handle Q8 this smoothly!

    matsudamacDec 22, 2025

    @RavirKun I'm sorry to ask but I'm thinking of buying a 3060 with 12 gb ,how many Loras do you use? What amount of cfg and that's 5 seconds video on 10 minutes?

    RavirKunDec 22, 2025· 2 reactions

    @matsudamac It doesn't matter how many LoRAs I'm using — the speed stays the same. 1 CFG, and yeah, still 10 minutes for a 5-second clip at 720p.

    AtalimzDec 21, 2025
    CivitAI

    I'm using your model but not your workflow (it seems to stall and get stuck on a CLIP node when generating at higher resolutions for me), but I keep getting unwanted mouth movement :( It might be my workflow (one of the older Easy WAN workflows), but I'm not sure what's causing it.

    darksidewalker
    Author
    Dec 21, 2025

    I don't know which CLIP you use or how your workflow works, but the mouth movement appears if you do not prompt enough.

    jenniferhoustonpancakesDec 21, 2025
    CivitAI

    A modest speed improvement, maybe 10-20% on a RTX 3060 12GB. Can make Missionary POV videos without LoRA help.

    TurboWan offers much greater promise.

    Update: It gets great results with FFGo! You can make it fuck anyone!

    darksidewalker
    Author
    Dec 21, 2025· 1 reaction

    You know that the speed-up is not the only point of the model, right?
    10-20%? Did you even make a standard WAN 2.2 video with 20-30+ steps?
    TurboWAN can not do what this checkpoint does, and the quality of Turbo is, well, underwhelming. What use is speed without quality?

    Well, you are free to use TurboWAN instead and have fun :)

    SaxliveDec 21, 2025

    Sorry, quick question: what does FFGo mean?

    @darksidewalker TurboWAN is bleeding edge. I have not used it. It claims 100-200x speedups, which can of course be offset with a much larger number of steps.
    https://huggingface.co/TurboDiffusion

    At the moment it does not have ComfyUI integration or offloading/DisTorch capability, so I can't use it. But in a few months it will probably be what everyone else uses.

    However, in the meantime this one gives slightly superior quality and speed compared to the WAN base models (and also, unexpectedly, works better with FFGo), so that is what I will be using for now. I'm surprised there aren't more generations listed below.

    jenniferhoustonpancakesDec 22, 2025· 1 reaction

    @Saxlive FFGo is a LoRA for WAN 2.2 that improves use of an initial reference image to encourage consistent characters.
    https://arxiv.org/html/2511.15700v1

    Pretty much it means you use a reference image and then immediately transition into the actual video, and the video it makes will match that reference more closely than it would without the LoRA. It's an alternative to using a defined start/end frame for character consistency.

    darksidewalker
    Author
    Dec 22, 2025· 1 reaction

    @jenniferhoustonpancakes Well, if that takes off, I'll gladly adopt the technology. But for now it is just not good.
    It would be awesome, but the quality is so low that the speed-up does not remotely pay off.

    gurusarrasDec 23, 2025

    @darksidewalker What would you recommend for hybrid use with an RTX 2070 8GB + 16GB system RAM? Turbo, or this if quantized low enough? What's the best video-gen AI I can use?

    darksidewalker
    Author
    Dec 23, 2025

    @gurusarras Turbo is trash atm.

    Maybe Q4 or Q5, you can just try what works for you.

    DurolithDec 22, 2025· 1 reaction
    CivitAI

    Can you use LowQ5 with high Q8?

    darksidewalker
    Author
    Dec 22, 2025

    In theory you can do that, but whether it produces good results, I don't know. I never tested mixing quants.

    qekDec 23, 2025

    Should work, but I'd keep the refiner with more bits (Q8)

    RedditUser981Dec 27, 2025· 1 reaction

    It's working; I am using Q2 high with Q8/FP8 low and getting fast results.

    darksidewalker
    Author
    Jan 4, 2026

    If you can use Q8 at any point, why even use Q2? The models are not loaded at the same time anyway.

    reportals21Dec 25, 2025
    CivitAI

    I have a problem: when I gen an NSFW clip, the meat rod doesn't go inside completely, only the tip, no matter which LoRA and prompt I've used.

    That's because people make real porn to show off their dick size and the fact that they get to have sex rather than make something immersive and useful that people can get off faster with. That's why there are so many side views and anything else that is not POV Missionary.

    The LoRAs are all trained on that, so of course it's not going to be all the way in on the generated stuff. Maybe someone (maybe me) can get their sex dolls and make some POV Missionary stuff with full penetration to train LoRAs on.

    Your best bet is to start from an image with the penis all the way in. Then if you want similar images, then either mask it being all the way in or low denoise because that's so hard.

    stylobcnDec 28, 2025
    CivitAI

    Thank you so much for your contribution, the videos really do look great. I have a question: is there any difference in quality or movement if you use the UnetLoaderGGUFDistorch2MultiGPU node? Thank you very much :)

    darksidewalker
    Author
    Dec 28, 2025· 1 reaction

    Thank you 😊

    Unfortunately I can't answer your question. Maybe someone else?

    stylobcnDec 28, 2025· 2 reactions

    @darksidewalker So far, the videos look good, etc. At least using the UnetLoaderGGUFDistorch2MultiGPU node for the models (when I downloaded your workflow, I changed it directly since those are the nodes I was already using, and I also added Sage and FP16 accumulation xD), I can make videos at 145 frames in 720p and 30fps with an RTX 3060, 12GB VRAM, and 64GB of RAM. Next, I have to try some of your Checkpoints for Illus, which look really good, hehehe. Thank you very much for your effort and for sharing. P.D: Sorry if my English isn't very good, I use Google Translate. xD

    stylobcnDec 28, 2025· 1 reaction

    @darksidewalker I forgot to mention that I use Q8; I used to use other Q8, but yours look better :)

    pafu462Dec 30, 2025
    CivitAI

    Thanks,

    I'm having a strange problem:

    The first generation after ComfyUI is started is fine, then the next ones are just static noise o_0

    I'm using the Q5 version, and I'm on the Electron version of ComfyUI. I also turned off the ComfyUI 2.0 nodes setup.

    It seems to be on my end. Could a reinstall of ComfyUI maybe solve it?

    darksidewalker
    Author
    Dec 30, 2025

    Hi, please make sure to use high and low in the correct loaders; if one is missing or they are mixed up, it will produce noise.

    Also make sure to install all dependencies.

    h3x33421Jan 3, 2026

    I had the same problem before: I had to restart ComfyUI after each generation to avoid that strange noise video. Increasing virtual memory to 32GB resolved the issue.

    Devilday666Dec 30, 2025
    CivitAI

    No Q3?

    darksidewalker
    Author
    Dec 30, 2025

    As you can see.

    Devilday666Dec 30, 2025

    @darksidewalker How come?

    darksidewalker
    Author
    Dec 30, 2025

    @Devilday666 You ask - Who decided that?

    ~ Me!

    Peking1Dec 31, 2025
    CivitAI

    Hey, not sure if you'd be able to help with a question. I see comments from people being able to use Q6 easily on 8GB VRAM and 32GB DRAM.

    My setup can generate using Q6 too; I have 16GB VRAM and 32GB DRAM. But it only loads WAN21 partially, so it takes FOREVER and my PC lags so hard I can't do much during it.

    Is there another setting or something I'm unaware of that causes the model to load only partially?

    PotatoChipOwOJan 1, 2026

    Try increasing the Windows virtual memory (pagefile). I am using 8GB VRAM + 32GB DRAM + 128GB virtual memory.

    Use the UnetLoaderGGUFDisTorch2MultiGPU node with cuda:0,1gb;cpu,* expert allocations. Offloading has minimal impact on video inference speed; it can make 20-second videos on my 3060 12 GB in about 15 minutes with sage attention, although they seem to lose coherence after about 8 seconds, i.e. they take the clothes off and then the clothes morph back into place.
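
    The expert allocation string follows a simple device,budget;device,budget pattern, where * means "whatever is left". Below is a minimal sketch of how such a string can be parsed; it only illustrates the syntax and is not ComfyUI-MultiGPU's actual code:

```python
def parse_allocations(spec: str):
    """Parse a DisTorch-style allocation string such as 'cuda:0,1gb;cpu,*'.

    Returns a list of (device, budget_in_gb) pairs, where None stands for
    '*' (take whatever is left). Illustrative only -- the real node may
    differ in detail.
    """
    result = []
    for entry in spec.split(";"):
        device, budget = entry.split(",")
        if budget == "*":
            result.append((device, None))  # remainder goes to this device
        else:
            result.append((device, float(budget.rstrip("gb"))))
    return result

print(parse_allocations("cuda:0,1gb;cpu,*"))
# [('cuda:0', 1.0), ('cpu', None)]
```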

    vicautJan 2, 2026

    This also depends on your chosen resolution.

    vicautJan 4, 2026

    I am using 8GB VRAM + 32GB DRAM with Q8 without any problems, even at 750k pixels. Maybe your pagefile is too small (mine is 70 GB+).

    Oh, I didn't notice the 32GB DRAM. I still remember, as recently as a few months ago, when I bought four 16 GB sticks from Temu for about $5 each after discounts, and I was worried I had bought too much. Now I have to restart ComfyUI if RAM use won't go below 30% (it has huge memory leaks), and I'm starting to look at server boards instead. The high-capacity RDIMM modules go for less than half as much.

    32GB isn't enough; even 64GB struggles with Wan. If you have a 4-slot board, see what you can find, even if you have to use SODIMM converters, mix sizes, and/or use much slower RAM.

    If you can use a second video card, you might be able to offload some of it there. Distorch can do multiple offload sources, and there are also some multi-GPU nodes I haven't tried.

    darksidewalker
    Author
    Jan 4, 2026

    @jenniferhoustonpancakes Is this a joke? RAM on Temu for $5, and you expected it not to be fake RAM?
    The model runs fine on 32-64 GB RAM without the need to offload.
    If you have to restart anything while under 30% RAM use, your RAM is faulty. Mine runs at ~99% full with some settings, with no problems afterwards.
    You should run a memtest to verify your RAM!

    @darksidewalker The RAM is fine; I ran memtest a long time ago. This particular machine runs on overlayroot and does not use a swap file by design, only zram. With an actual swap file it might be able to tolerate larger memory leaks before I need to restart ComfyUI. Nevertheless, RAM use in ComfyUI builds up over time, and I haven't found a node that can decrease it; the RAM-Cleanup node just freezes it. Wan (especially WanVideoWrapper) and Nunchaku are the worst culprits.

    The $5 was no joke: the base cost for two 16 GB GMOG sticks a few months ago was $44, but after the claimcredit/wincredit discount most of that is refunded as store credit. You do have to remember to check it every single day for a week, though; I've forgotten twice, and that's cost me about $200 this year. They have EAGET sticks now for a base price of about $100; not as good as the GMOG kind, but after claimcredit you would be spending about $40.

    I would not try to overclock them, though. Some of the GMOG sticks I've gotten list Micron as the manufacturer, and I speculate they are probably rejects from the big brands that aren't stable under their standard timings, relabeled with looser timings to give them a new life.

    darksidewalker
    Author
    Jan 4, 2026

    @jenniferhoustonpancakes Well, okay.
    Just something to add: the "clean" nodes are all placebo or introduce memory leaks; Comfy itself stated that. There are some unload nodes that work for checkpoints, but stay away from those "cleaning" nodes.
    WanVideoWrapper is a monster; it also uses much more resources than the standard sampler/loader, and its offloading sometimes makes things worse than the standard memory management. I really do not get why people are eager to use it. It may be necessary for some bleeding-edge things like SVI, but under normal circumstances I would avoid it altogether.

    @darksidewalker I actually chain three different VRAM-cleaning nodes between KSampler and VAE Decode, and it prevents OOMs on the latter to some extent. Not sure which one(s) actually work.
    The only reason I see to use the WanVideo Sampler is when there are embeds that are unclear how (or whether) to use in the native KSampler. That means BindWeave and, to a lesser extent, MAGREF. However, today I found that Wan 2.2 S2V works even better than either of those (I've also had success with the more complicated Animate, though it needs a source video), so there's really no reason to use Kijai's nodes at all.

    It's able to preserve narrow waists on characters, and there still isn't a good LoRA for small waists; I can chain Petite Hourglass at 0.50 and Teen Titans (yes, really) at 0.75, but more LoRAs = more steps for convergence. I haven't had any success getting Wan to generate supermodel waists without LoRAs, but I actually did manage it with Qwen using "Disney character proportions" in the prompt.

    vicautJan 2, 2026
    CivitAI

    Will there be a GGUF of version 8.1 too? Btw, I recommend 2 high and 4 low steps with euler/beta57 at shift 5.7.

    darksidewalker
    Author
    Jan 4, 2026· 1 reaction

    8.1 safetensors = 8.0 GGUF, pretty much.
    No need for an extra version here.

    ecchi_goonerJan 5, 2026
    CivitAI

    Awesome model! Somehow CivitAI does not recognize my metadata, even though the Video Combine node has "save metadata" set to true.

    Is this an issue on my end or Civit's?

    darksidewalker
    Author
    Jan 5, 2026

    This is on CivitAI's side, unfortunately.

    supercoJan 5, 2026
    CivitAI

    HELLO, 5060 Ti 16GB / 5070 12GB

    Which one is better?

    hazzoom82659Jan 5, 2026

    I would recommend the 5060 Ti for its 16 GB of VRAM:

    AI work generally prefers more VRAM, to load more models & LoRAs faster; bigger system RAM also helps.

    The 5070 may have a better processor, BUT its 12 GB of VRAM will require more work from you to adjust workflows for lower VRAM, and you may also need models that can run on lower-VRAM cards, etc.

    darksidewalker
    Author
    Jan 5, 2026

    Agreed. For AI, more VRAM is always better. Anything under 16GB is hard to use.

    kennysladefan293Jan 24, 2026

    As an example, if you're generating 720p video, 12GB VRAM will limit you to around 65 frames maximum, but with 16GB VRAM you can render 89 frames at 720p. I have a 12GB RTX 4070 Super, an RTX 5060 Ti 16GB, and the best of both worlds, an RTX 5070 Ti :)
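
    The frame ceilings above track the transformer's token count, which grows roughly linearly with frame count. A rough back-of-envelope sketch; the 4x temporal / 8x spatial VAE compression and the 2x2 patchify step are assumptions about Wan 2.x, not measured values:

```python
def wan_token_count(frames: int, width: int, height: int) -> int:
    """Rough transformer token count for a Wan 2.x video clip, assuming
    4x temporal / 8x spatial VAE compression and a 2x2 patchify step
    (assumed constants)."""
    latent_frames = (frames - 1) // 4 + 1
    tokens_per_latent_frame = (height // 8 // 2) * (width // 8 // 2)
    return latent_frames * tokens_per_latent_frame

# With memory-efficient attention, activation memory scales roughly
# linearly with tokens, so the maximum frame count at a fixed resolution
# scales with the VRAM left over after the weights are loaded.
print(wan_token_count(65, 1280, 720))  # 61200
print(wan_token_count(89, 1280, 720))  # 82800
```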

    m9xeng850Jan 5, 2026· 1 reaction
    CivitAI

    First of all, thank you for your work; you've been killing it! One maybe stupid question: how come the Q8 version (14.3 GB) weighs more than your original non-GGUF TastySin 8.1 (13.5 GB)? Does it deliver better quality or something? Once again, thanks for your checkpoints, love it <3

    darksidewalker
    Author
    Jan 5, 2026· 1 reaction

    GGUF files tend to be larger because they store extra data in higher precision. Compared to simple fp8_scaled, Q8 is slightly better. Compared to fp8 mixed precision it is equal, while the safetensors is more memory efficient and less demanding when adding LoRAs.
    I would always recommend safetensors over GGUF, imho.
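
    The size gap is consistent with how GGUF's Q8_0 quantization is laid out: each block of 32 weights stores 32 int8 values plus one fp16 scale, i.e. 34 bytes per 32 weights (~8.5 bits/weight), versus a flat byte per weight for plain fp8. A quick sanity check; the 14B parameter count is a round-number assumption:

```python
def gguf_q8_0_bytes(n_params: int) -> int:
    """Q8_0: blocks of 32 int8 weights + one fp16 scale = 34 bytes/block."""
    return n_params // 32 * 34

def fp8_bytes(n_params: int) -> int:
    """Plain fp8: exactly one byte per weight."""
    return n_params

n = 14_000_000_000  # rough parameter count of the 14B model
print(gguf_q8_0_bytes(n) / 1e9)  # 14.875 -> ~0.9 GB of scale overhead
print(fp8_bytes(n) / 1e9)        # 14.0
```

    The real files differ from these idealized numbers because some layers are kept in higher precision, but the ordering (Q8 GGUF > fp8 safetensors) follows directly from the per-block scales.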

    sarradetjJan 6, 2026
    CivitAI

    Hey, so the model is amazing, but I am getting really low-contrast and grainy videos when I use the recommended settings, with UniPC_BH2/Simple as the low noise and Euler/Simple as high, at 4 steps and CFG 1. Is it just me, or is there something you are doing to get the right color output? I have to raise the steps to 8 to get good color, but that's double the recommendation...

    darksidewalker
    Author
    Jan 6, 2026

    Hey, make sure to use the high and low models in the correct loaders.

    kennysladefan293Jan 24, 2026

    I'm getting the same results as you.

    ree89Jan 6, 2026
    CivitAI

    I found that res_multistep+beta is better than unipc_bh2+simple

    darksidewalker
    Author
    Jan 6, 2026

    It depends on the scene and the result to be achieved. Res_multistep can slightly overdo some things or add unwanted motions.

    ree89Jan 13, 2026· 1 reaction

    @darksidewalker After long-term use, I found that res_multistep has fewer motion effects and the picture looks more stable, while UniPC has more motion. In fact, your model performs extremely well, comparable to Kling's closed-source model💕

    gerry3kJan 8, 2026
    CivitAI

    This model works amazingly so far. Well done! I just noticed that if I only move the camera around a character, the character starts to shake its hips. When I add a simple action (like adjusting hair), the hip shaking stops for that period of time.

    Is there any prompt magic to get a more or less static character that doesn't do this motion?

    civitaisks777Jan 16, 2026

    Describe the motion you want them to have, like when you specified adjusting hair. If you want a static character, you can try something like "[camera movement] the girl adjusting her hair as she stands still in a [description] pose throughout the video".

    I haven't done camera orbits, so you may need to play around with the wording. You can also use NAG nodes to enable negatives for things like "shaking hips" if the above doesn't help enough.

    MostimaJan 9, 2026
    CivitAI

    Hello! I ran into a problem when using the model: the decoder threw an error when saving the video, so following GPT's instructions I switched the save format to mp4. The result was very good, but at some point the video suddenly became very bright? It feels very strange. I had just switched to tiled VAE to ease the VRAM problem. (This is machine translated; my English is not good.)

    SHAMELESS_SLUTJan 11, 2026
    CivitAI

    Not sure if it's Wan in general, your model, or just my workflow that's the issue, but something seems to be so overly focused on women's bodies that even my generated men get pussies above their balls haha

    I'm trying to generate some clips where the sex hasn't started yet, and the women look great, but I can't for the life of me get consistent penises. Even when prompting details about the penis, it only works once in a while at best.

    Does anyone have any tips on what I can do?

    darksidewalker
    Author
    Jan 11, 2026

    Nudifying real people is against the ToS on CivitAI, so if you are trying to do that: the model is not finetuned/trained for it. How the model works and what is not possible is clearly written in my announcement and on the front page.
    You may need a LoRA for specific details out of sight, if one is available.

    If it works for generating genitals out of frame, as you noticed, that is just coincidence.

    https://civitai.com/articles/23271/release-of-dasiwa-wan-22-i2v-tastysin-lightspeed-or-gguf-or-safetensors

    Also, this is I2V, so what you want to animate should be inside the image. That's how WAN 2.2 I2V works.

    rnorthJan 16, 2026

    I sometimes add the penis LoRA at 0.2 strength or higher to get nicer-looking male members generated out of frame. Don't forget to add the trigger word PENISLORA to your prompt:

    high: https://civitai.com/models/1476909?modelVersionId=2284083
    low: https://civitai.com/models/1476909?modelVersionId=2284089

    credit goes to https://civitai.com/user/tazmannner379 for this lora

    HibsMaxJan 13, 2026
    CivitAI

    I appreciate this is a "me" problem, but maybe you can still assist? Recently, I started getting this message:

    CLIPLoader

    'NoneType' object has no attribute 'Params'

    So I tried using your Workflow to see if there was something wrong with mine, but I get the same error. I have tried looking online, but got lost real quick. Do you have any hints?

    The only edits I made to your workflow are:

    1. updated path to VAE,

    2. updated the load image to an image on my file system.

    I have updated ComfyUI and all custom nodes. I am running out of a venv on ubuntu.

    Cheers!!!

    darksidewalker
    Author
    Jan 13, 2026

    I would say you selected the wrong CLIP checkpoint.

    HibsMaxJan 13, 2026

    EDIT: I moved the safetensors files to the checkpoints folder, but I still get this error:

    CLIPLoader

    'NoneType' object has no attribute 'Params'

    I don't know what happened, but all of my workflows that used to work, even those loaded from mp4 metadata, are now giving me this error. Definitely a "me" problem.

    Thanks for the quick reply. I am using these with a Load Diffusion Model node:
    Y:\ComfyUIModels\models\diffusion_models\wan\DasiwaWAN22I2V14BV8V1_tastysinLowV81.safetensors

    Y:\ComfyUIModels\models\diffusion_models\wan\DasiwaWAN22I2V14BV8V1_tastysinHighV81.safetensors

    and this with a Load CLIP node:

    Y:\ComfyUIModels\models\text_encoders\umt5_xxl_fp8_e4m3fn_scaled.safetensors

    and I am using that non-ComfyUI folder for storing/loading models.
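
    When a CLIPLoader error like this survives a workflow swap, one cheap check is whether the text-encoder file itself is intact. A safetensors file starts with an 8-byte little-endian header length followed by that many bytes of JSON, so a truncated or corrupted download can often be caught with the standard library alone. A sketch, not an official tool:

```python
import json
import struct

def read_safetensors_header(path: str) -> dict:
    """Return the JSON header of a .safetensors file.

    Layout: 8-byte little-endian u64 header length, then the JSON header.
    A struct or JSON error here usually means a corrupt/incomplete file.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(header_len))

# Usage (path from the workflow above):
# hdr = read_safetensors_header("umt5_xxl_fp8_e4m3fn_scaled.safetensors")
# print(sorted(hdr)[:5])  # tensor names; raises if the file is damaged
```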

    lolbleach001584Jan 17, 2026
    CivitAI

    Hi, I have an RTX 5080. Which model should I use, Q8 GGUF or FP8?

    darksidewalker
    Author
    Jan 17, 2026· 1 reaction

    FP8

    habibingJan 18, 2026· 3 reactions
    CivitAI

    Hey,Great job

    FUTAXLOVAJan 19, 2026· 4 reactions
    CivitAI

    Wow ! great model! Thank you very much!

    ModFrenzyJan 21, 2026· 12 reactions
    CivitAI

    First of all, this is the best WAN 2.2 I2V mix I've tried and I tried most of them, amazing job!

    Secondly, I don't know if it's because you worship the NSFW gods or because I added sage attention and Triton to my workflow, but the Q8 GGUF version of your model works with my 8 GB 3060 Ti and 32 GB DRAM! I can easily make 960x544 videos within 4 minutes, wtf?! I can even make 720p videos if I don't mind waiting 20-25 minutes. This is gold! I will update my workflow to share it with you as well!

    Keep it up, you are the reason I'm able to make smooth NSFW videos!

    Update: Here's the workflow: https://civitai.com/models/2272369?modelVersionId=2619397

    darksidewalker
    Author
    Jan 21, 2026

    Thank you mate! Glad my model works that great for you😊

    ArtanizJan 21, 2026

    Can you post your workflow, please? I want to give it a shot.

    darksidewalker
    Author
    Jan 22, 2026

    @ModFrenzy What exactly does the workflow implement to use less VRAM? I cannot find anything that would do that, but I'm interested :)
    Some advice: ditch "clip vision h", because it uses more VRAM and WAN 2.2 cannot use it; the layers are not trained/are missing.

    ModFrenzyJan 22, 2026

    @darksidewalker I have no idea; I was not able to run Q8 versions of other models until I enabled sage attention and the patch torch nodes. The only reason I made the workflow is to let people know that if they have 8GB VRAM and 32GB DRAM like me, they can still make very good quality NSFW videos.

    I did not know about the clip vision h, gonna remove it from the workflow, thanks for the heads up!

    darksidewalker
    Author
    Jan 22, 2026

    @ModFrenzy Okay, thanks for clarification! :)

    merrickcoxJan 27, 2026
    CivitAI

    I'm running this on a 5060 Ti 16GB, and even when I use your official workflow I get checkerboarding or tile lines on my outputs, too ugly to use for anything. I've tried many things, including switching to Linux, but the artifacts are completely consistent and ruin my shots! >.< Tested the Q8, Q4 and Q2 quants. I'm sure it's me, but I can't tell how!

    darksidewalker
    Author
    Jan 27, 2026

    With 16GB VRAM I would use the safetensors; don't bother with GGUF.

    If you use my workflow and get heavy artifacts, your ComfyUI may not be installed correctly or your GPU/VRAM is damaged.

    Check that you did not overclock your GPU.

    PiranhaPiranhaPiranhaJan 29, 2026· 5 reactions
    CivitAI

    Good Job!

    I've tried a couple of NSFW checkpoints (I2V) over the last few weeks. And this is the best one. The others are sometimes a little better in certain aspects, but this is the best all-rounder by far. Realistic bodies, decent faces, and good prompt recognition.

    Jehuty64Jan 29, 2026
    CivitAI

    Very good job. Should I add the SVI_v2_PRO LoRA?

    evantopsmithFeb 1, 2026

    I think you need specific SVI workflows for the SVI LoRA to work.

    ColorfanJan 31, 2026
    CivitAI

    I'm not sure what I'm doing wrong, but something about this particular model is giving me a hard time: gen time is extremely slow.

    All I did was swap out the older WAN 2.2 Q6_K GGUF models I was using for these Q6 models and keep the settings the same, aside from disabling my lightning LoRAs. Gen times went from under 3 minutes with my older models to well over 10 minutes with this one. What changed?

    Using a 3090 on SwarmUI.

    darksidewalker
    Author
    Jan 31, 2026

    Maybe your install is not up to date. v8 and higher are fp8+ (mixed precision). 10 minutes suggests it is starting to swap.

    ColorfanFeb 3, 2026

    @darksidewalker I'm using the latest SwarmUI; I even reinstalled it. It's very odd. The file sizes of my old models and this one are identical.

    darksidewalker
    Author
    Feb 3, 2026

    @Colorfan I cannot tell you; with ComfyUI and my workflows the times are consistent and normal.

    jefharrisFeb 6, 2026
    CivitAI

    First run looks promising. Been following your progress. Would love a link list of where you got all those other LoRAs, especially the wan2_2_14b_i2v_sigma_000002100_low_noise LoRA. I found the high version but can't seem to find the low version.

    darksidewalker
    Author
    Feb 6, 2026

    No idea what you mean by this.

    jefharrisFeb 9, 2026

    @darksidewalker In your lora stack you have the "wan2_2_14b_i2v_sigma_000002100_high_noise" and the "wan2_2_14b_i2v_sigma_000002100_low_noise" lora. Can't find where you got the "wan2_2_14b_i2v_sigma_000002100_low_noise"

    darksidewalker
    Author
    Feb 9, 2026

    Never heard of that LoRA; you must be confusing something.

    VeegaaFeb 7, 2026
    CivitAI

    Is there a reliable way to get something like this working on a 5070 with 12GB VRAM? The LTX stuff looks promising, but I'm done trying out stuff and downloading models for hours just to get weight or tensor errors.

    darksidewalker
    Author
    Feb 7, 2026

    Weight and tensor errors are not from the model files; they are from the setup you installed to use them... maybe your setup is just broken.
    Btw... LTX is far from good.

    VeegaaFeb 7, 2026

    @darksidewalker Thx. Recommendation for 12GB VRAM?

    darksidewalker
    Author
    Feb 7, 2026

    @Veegaa q5/6

    MilitAIFeb 27, 2026

    try wan2gp

    sp2758827329Feb 7, 2026· 3 reactions
    CivitAI

    This is the best model I have found so far. Outstanding job, thanks a lot. You can even add characters to the scene that are not in the initial image! And you don't need any LoRA; it can do it all.

    TieFighterPilotFeb 19, 2026· 1 reaction
    CivitAI

    I tested it without LoRAs!
    The resulting video quality is awesome!
    Which LoRAs are integrated? Do you have a list?
    Another question...
    What happens when using a LoRA that is already baked in?

    darksidewalker
    Author
    Feb 19, 2026

    Thanks!
    For all shared details, refer to the announcements for the models on my page.
    As for the LoRA question: well, it will add up.

    sourav4068362Feb 22, 2026

    @darksidewalker Brother, I have one request: can you make a workflow for WAN 2.2 Animate? I tried a lot of them, but none work perfectly.

    darksidewalker
    Author
    Feb 22, 2026

    @sourav4068362 Hey 👋 Since I don't use Wan Animate, this is unlikely to happen.

    quazaqueFeb 23, 2026
    CivitAI

    I don't get it. It says this is the high noise version, so where do I find the low noise one?

    darksidewalker
    Author
    Feb 23, 2026· 1 reaction

    In front of you right under the headline.

    lionelelian731Mar 1, 2026
    CivitAI

    Hi, it's a great workflow!

    I'm kind of a noob in ComfyUI, and in the workflow I don't know how to add LoRAs: the high and low LoRA boxes are empty in my workflow (there should be an "add loras" button, but there isn't), and I don't know how to add a LoRA to those boxes. Can anyone tell me how to add a LoRA?

    darksidewalker
    Author
    Mar 2, 2026

    If the boxes are empty, you either need to install the missing custom nodes, or your ComfyUI is broken/outdated.

    ProR2D2Mar 5, 2026
    CivitAI

    The character is always looking at the screen, and prompts do not stop it. Is there anything I can do to keep the model from doing this?

    NyxxiNyxMar 5, 2026· 1 reaction
    CivitAI

    Yo. Just started testing this model of yours in the last few days.
    Really good work. I'm getting great results in most cases.
    Will start posting some stuff soon.

    Much appreciated for your effort and releasing this.

    ratamahaeva757Mar 11, 2026
    CivitAI

    Help: how do I remove these pelvic sex movements?

    darksidewalker
    Author
    Mar 12, 2026

    Prompt other movements, or use a more recent checkpoint from me.

    matches345Mar 11, 2026· 1 reaction
    CivitAI

    Great work, thanks for your efforts!

    Is there any way to get a Q3 option for this model?

    darksidewalker
    Author
    Mar 12, 2026· 1 reaction

    Thanks, but the model is no longer maintained, so there will be no other quants.

    I only do new quants for new releases, if possible.

    matches345Mar 13, 2026

    @darksidewalker Hey darksidewalker, thanks for the response! It turns out that I'm able to run the Q4 and Q5 as long as I keep a simple workflow.

    jenniferhoustonpancakesApr 7, 2026· 1 reaction

    @matches345 Use MultiGPU and Distorch v2. Expert mode allocations cuda:0,0.5gb;cpu,* . Video is so compute heavy that you can offload almost the entire model with only a few percent performance loss. Then you can easily run Q6 in 12 GB.

    ...or FP8 for a 10-20% performance boost with some quality loss.

    There's really no reason to run GGUF under Q6 for video models.

    aigenre190226Mar 12, 2026
    CivitAI

    AMAZING WORKING PERFECTLY IN A 3060 12 GB!!!! AMAZING WORK

    manmars1850Mar 13, 2026· 3 reactions
    CivitAI

    EXCELLENT MODEL AND INCREDIBLE WORKFLOW. I HAVE NEVER SEEN SUCH A DETAILED AND ACCURATE WORKFLOW SINCE READING THE INSTRUCTIONS INSIDE THE ANCIENT ANTIKYTHERA MECHANISM. YOU ARE THE FIRST IN 2300 WHOLE YEARS. CONGRATULATIONS, BOTH FOR YOUR MIND AND FOR YOUR WILLINGNESS TO HELP.

    ReGeneratedMar 19, 2026
    CivitAI

    Really good checkpoint. Just wondering why you embedded the lightning LoRA into the high model? It prevents removing lightning on the high sampler to get better motion. Do you think it'd be possible to release a high version with no lightning LoRA?

    Maybe using the SynthSeduction TrueVision high could do the trick?

    darksidewalker
    Author
    Mar 19, 2026· 1 reaction

    v10 and v9 both have a TrueVision version without lightning. You can use these.

    ReGeneratedMar 19, 2026

    @darksidewalker Works perfectly, thanks!

    RichenbergMar 23, 2026
    CivitAI

    Can it generate 1080p video without upscaling?

    darksidewalker
    Author
    Mar 24, 2026

    It is WAN 2.2, so it could. Is that a resolution WAN wants and natively supports? No.

    InperfectorMar 28, 2026
    CivitAI

    Great model. Is a Q6 available without lightning, by any chance?

    s_songjiafeng01Mar 30, 2026
    CivitAI

    ValueError: cannot reshape array of size 43990400 into shape (13824,5440)

    Using this GGUF model throws an error; the Unet Loader (GGUF) node cannot read it.

    Asking for help, thanks.

    That sounds similar to an error I was getting with VideoHelperSuite and animated previews. Try using latent2rgb instead of taesd.
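
    For what it's worth, the numbers in that ValueError don't add up: the loader expects 13824 x 5440 = 75,202,560 elements but only found 43,990,400 in the file, which usually points to a truncated or corrupted download (or a mismatched file) rather than a node bug. The mismatch is easy to confirm:

```python
expected = 13_824 * 5_440  # elements the reshape needs
found = 43_990_400         # elements actually read from the GGUF
print(expected, found)     # 75202560 43990400
assert found < expected    # file is short: re-download the GGUF and
                           # compare its size/hash against the model page
```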

    TheWor1DApr 11, 2026
    CivitAI

    Cannot find the node OrchestratorNodeMuter, and the node manager cannot find comfyui_custom_switch.

    New users probably can't use this workflow.

    darksidewalker
    Author
    Apr 11, 2026

    This is not a workflow; it is a checkpoint.

    jianyuan470575Apr 23, 2026
    CivitAI

    Bro, how do I run this file? I'm new....

    syphe218Apr 25, 2026
    CivitAI

    I ran this on my RTX 6000 Pro Blackwell; every attempt, with every setting I tried, came out grainy and diffused.

    darksidewalker
    Author
    Apr 25, 2026

    Sure, if you're using WAN 2.2 wrong. That's not a problem with the RTX; please read the usage instructions carefully 👍