    💀💋 DaSiWa-WAN 2.2 I2V 14B TastySin v8 | Lightspeed | GGUF 💋💀 - Q6 High
    NSFW

    💀💋 DaSiWa-WAN 2.2 I2V 14B TastySin | Lightspeed | GGUF 💋💀

    This is a WAN 2.2 model: you will need one High + Low pair.

    Version overview: https://civarchive.com/articles/23495/dasiwa-model-versions-and-timeline


    🔮 Key Features:

    • 🔥 LoRA-Free Generations
      Generate high-quality videos without stacking WAN 2.2 LoRAs (unless you want to add special styles/concepts).

    • ☄️ Fast: 4-step generation

    • 💫 Quality motion (fewer slowdowns, no pixelated hyper-motion)

    • 🔞 NSFW and SFW + extremely versatile (more built-in concepts):

      • Enhanced anatomy + poses + framing

      • Better understanding of sexual concepts

    • 🪄 Better prompt responsiveness

    • 👘 Better understanding of anime/manga style composition

    • 🪡 Q8 (FP16 base) precision

    • 🚫 Do not use any extra speed-up (low-step) LoRAs; this is already baked in

    ✅ Optimisations

    • 🌟 CFGZeroStar patch (better results and prompt adherence)

    • 🍰 Baked-in latest distillation (r64-1022)

    • 🍀 Additional concept optimisation (compared to MidnightFlirt)

    • 🎬 Reward Attention (more realistic movements)

    • ✨ No or extremely low transformation of details with anime/realistic images (lips, eyes, ears, breasts, genitals, ...)

    • 🌠 Refined motions (compared to MidnightFlirt)

    • 💌 Better at guesstimating details out of frame (compared to MidnightFlirt)

    • 💫 As close to real-life motion speeds as I could get with speed-up tech

    • 🌍 Even fewer tries for good results (compared to MidnightFlirt)

    • 🧩 Even better compatibility with LoRAs (compared to MidnightFlirt)

    • 🖼️ Usable with your preferred/custom CLIP (if compatible)

    • 😵‍💫 Further reduced hallucinations (compared to MidnightFlirt)

    • 💠 Capable of zero-prompt results

    • 🚫 CLIP excluded


    ๐Ÿ’Workflow

    Make sure to checkout my easy to use Workflows!


    ๐Ÿ„LoRA's

    Try first without additional LoRAs!

    But: This checkpoint is not meant to replace all LoRAs, it is meant to:

    • Perform better overall at his own

    • As easy as possible to use

    • With LoRAs to be absolutely awesome


    🪧 Announcement

    ⚠️ Read the corresponding announcement.

    📢 Make sure to check it out for in-depth information and a detailed comparison!


    🆕 New to WAN 2.2 I2V? - Check out my guide.


    • Steps: 4

    • CFG: 1

    • Sampler/Scheduler: Euler/Simple, UniPC_BH2/Simple

    • Resolution: up to 720p (native quality)

    • Add other LoRAs at 0.3-0.6 strength at first

    • 16 or 24 fps, 81 or 97 frames ~ 5s
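
    A minimal sketch of how these settings can map onto the usual ComfyUI High + Low pair, assuming KSamplerAdvanced-style nodes and a 2/2 step hand-off between the two models (a common WAN 2.2 convention, not something prescribed here):

    ```python
    # Sketch only: KSamplerAdvanced-style settings for a WAN 2.2 High+Low
    # pair at 4 steps / CFG 1. The 2/2 split between High and Low is an
    # assumed convention, not taken from this page.

    TOTAL_STEPS = 4  # "Steps: 4"
    CFG = 1.0        # "CFG: 1" (negative prompts have no effect at CFG 1)

    high_noise_pass = {           # first sampler, High model loaded
        "add_noise": "enable",
        "steps": TOTAL_STEPS,
        "cfg": CFG,
        "sampler_name": "euler",  # or "uni_pc_bh2"
        "scheduler": "simple",
        "start_at_step": 0,
        "end_at_step": 2,         # hand off halfway (assumed split)
        "return_with_leftover_noise": "enable",
    }

    low_noise_pass = {            # second sampler, Low model loaded
        "add_noise": "disable",   # continue from the High pass latent
        "steps": TOTAL_STEPS,
        "cfg": CFG,
        "sampler_name": "euler",
        "scheduler": "simple",
        "start_at_step": 2,
        "end_at_step": TOTAL_STEPS,
        "return_with_leftover_noise": "disable",
    }

    print(high_noise_pass, low_noise_pass, sep="\n")
    ```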


    Dependencies


    💫 Speed + Examples

    Q8 checkpoint - on 16 GB VRAM, 64 GB RAM, 4 steps, CFG 1, 81 frames

    • 368p: 120 sec

    • 480p: 160 sec

    • 576p: 220 sec

    • 608p: 340 sec

    • 672p: 680 sec

    • 720p: 730 sec

    • Most examples are without any additional LoRAs

    • Examples with LoRAs are for testing compatibility

    • The initial anime-like example images made by me are also made with my model 🗡️💀 DaSiWa-Illustrious-XL 💀🗡️

    • Other models were used for the realistic examples


    🩻 Known issues

    • 🫦 The most delicious sin!

    • Tell me 🫵🫢


    Approximate expected quality from quantization

    These are my tests compared to a full FP16 safetensors checkpoint, taking prompt and visual satisfaction into account, on my DaSiWa checkpoints.

    Quantization - Quality estimation

    • Q8 ⭐⭐⭐⭐⭐ ~ like FP16/FP8+, excellent results

    • Q6 ⭐⭐⭐⭐ ~ like FP8, very good results

    • Q5 ⭐⭐⭐ ~ good to very good results

    • Q4 ⭐⭐ ~ medium-good results

    • Q2 ⭐ ~ poor results, only use if you have to

    ⚠️ Do not compare this with the unofficial quants of my checkpoint made by others; they are based on FP8 and not FP16 like my quants.


    🩺 Fixes & Feedback

    • If you use LoRAs, try to respect the LoRA training triggers and try some versatile descriptions; most LoRAs will work at 0.3-0.6 (start with 0.3)

    • Raise LoRA strength in small steps of +0.1 (see the sketch below)

    • Do not mass-add LoRAs; just add 1 or 2 (×2 for High+Low)

    • Negative prompting does not work with CFG 1; that's a limitation of speed-ups at CFG 1 (unless you use NAG)

    • Low resolutions (e.g. 384x576) are for fast samples and will blur fine details; use a higher resolution if you want clear details

    🪧❗ Test your ComfyUI backend with this absolutely basic test workflow before asking about errors.


    🖤 Why I Made This

    I was tired of using massive lists of LoRAs just to get a remotely good result after 10 generations, consuming hours of time.
    Now I can just make my videos with 1 or 2 concept LoRAs, without pushing 6 to 10 LoRAs (Low/High) into a generation.


    This checkpoint is also my personal playground.


    Closing words

    🤩 I want to thank all the fantastic other creators who made super nice LoRAs and concepts to play with! Support these awesome creators by using their LoRAs, posting to their galleries, and sharing the metadata!
    ⚠️ I made all this with permission or open-source resources (at the time they were incorporated).

    I share as many insights as I can without compromising my work. I'm doing this for fun as my hobby and just do not want my hobby to be destroyed.

    More details can be found in the corresponding announcements!


    If you would like to contribute to my awesome (😉) checkpoint or are willing to share resources, I'll gladly give credit! Just contact me!

    ✅ All credits/resources are mentioned inside the announcements! - since different versions may have different resources.


    YOU are responsible for outputs, as always! If you make ToS-violating content and I become aware of it, I WILL report it.


    Disclaimer

    These models are shared without warranties and on the condition that they are used in a lawful and responsible way. I do not support or take responsibility for illegal, harmful, or harassing uses. By downloading or using them, you accept that you are solely responsible for how they are used.


    Custom License Addendum: Distribution Restriction

    Notice: Notwithstanding the base license selected for this model, the following restrictive terms apply:

    1. No Redistribution: You are not permitted to host, mirror, or redistribute this model (checkpoint, LoRA, or Safetensors files) on any other platform, website, or service (including but not limited to Hugging Face, Tensor.art, or SeaArt) without explicit written permission from the creator.

    2. Attribution & Source: This model is officially maintained only on Civitai or other platforms where I explicitly own the repository. To ensure users receive the correct version, updates, and safety metadata, please point users to the original URL.

    3. Usage: All other rights regarding the use of the model for image generation remain as per the terms and the restrictions provided per model.


    Comments (14)

    redlucario1735 · Dec 6, 2025 · 4 reactions

    Honestly the best checkpoint I've used. I've tried virtually every single one on this website, and the best for me was always smoothmix Q8 (5080, can't do the 20 GB one), but I like this one even more. For anime style this one wins every single time; for IRL content they're more evenly matched, but I still lean towards yours. GG man, awesome checkpoint.

    darksidewalker
    Author
    Dec 6, 2025

    Thanks 👍 Glad you like it!

    waifusynthlabs · Dec 7, 2025 · 2 reactions

    Awesome version! Everything you said is on point. Is there a way to produce more movement, though? I find it a bit stiff. More steps maybe?

    darksidewalker
    Author
    Dec 7, 2025 · 1 reaction

    You mean more motion? The following can provide more motion:

    - More steps in high

    - More fps + frames

    - LoRAs for high motion

    - Setting a higher shift value

    - Samplers that add motion (like painterI2V, but they can bug out the videos)
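
    A minimal sketch of where the shift knob usually sits, assuming the ModelSamplingSD3 node and an 8.0 baseline from typical WAN 2.2 ComfyUI workflows:

    ```python
    # Sketch only: in common ComfyUI WAN 2.2 workflows, shift is set on a
    # ModelSamplingSD3 node patched onto both the High and Low models.
    # The node choice and 8.0 baseline are assumptions, not from this page.

    def shift_experiment(baseline=8.0, step=1.0, tries=3):
        """Yield shift values to test, starting from a typical baseline."""
        for i in range(tries):
            yield baseline + i * step

    for shift in shift_experiment():
        # e.g. ModelSamplingSD3(shift=shift) applied to High and Low
        print(f"test shift={shift} on both model patches, compare motion")
    ```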

    General guidelines can be read here: https://civitai.com/articles/20293/darksidewalkers-wan-22-14b-i2v-usage-guide-definitive-edition

    waifusynthlabs · Dec 7, 2025

    @darksidewalker Ah yes, like more motion. I'll try LoRAs and increasing the steps. I already have shift 8 at the moment. Yeah, painterI2V is nice but the quality degrades. I generated 7-8s vids with my WF; anything more than that, I get looping animation.

    darksidewalker
    Author
    Dec 7, 2025

    @waifusynthlabs That is normal; every WAN model will loop back, because WAN is trained on 5s and then starts to loop back a bit more with every second.

    kovila77 · Dec 8, 2025

    Could you give some examples of LoRAs for high motion?
    Also, sometimes I mix the LOW version of some other model with the HIGH version of this one or MidnightFlirt to get more motion. Results vary greatly from run to run.

    hazzoom82659 · Dec 9, 2025

    @darksidewalker Hey Dark, speaking of the loop back & 5s: your WAN 2.2 MOD models can make perfect 6-second videos (sometimes 7s, but with a bit of luck with the seeds & the output dimensions width x height), and your first Lightspeed WAN 2.1 MOD model could easily go up to 8 or even 10 seconds of good generation without looping.

    darksidewalker
    Author
    Dec 9, 2025 · 1 reaction

    @hazzoom82659 True. WAN 2.1 was better at longer videos, while WAN 2.2 tends to loop back. But I'm not aware of a technique I could implement to make WAN 2.2 generate longer videos, except retraining the complete model for it. And most LoRAs are also only trained for 1-5 sec, so there would be almost no benefit from that.

    kovila77 · Dec 13, 2025

    @darksidewalker On the topic of long videos, what about SVI? https://github.com/vita-epfl/Stable-Video-Infinity

    I am not sure of the actual goal of SVI, but I tried it anyway. Using the official workflow for WAN 2.2 (https://github.com/vita-epfl/Stable-Video-Infinity/tree/svi_wan22/comfyui_workflow) it generates something, but it seems to be just a first-to-last-frame workflow. Results seem to have less freezing between 5s clips, but also less motion.
    If I try to increase the overlap of clips to more than 1 frame, there is degradation and a seam appears, at least with unofficial WAN 2.2 checkpoints.
    I found a video with SVI (https://youtu.be/GmKBZkqIsJE) in which a single clip of 161 frames was generated using Qwen3-VL as a prompter to give movement descriptions by the second. I have not tried this, as it seems impractical for very long videos (maybe I'll try it sometime).

    waifusynthlabs · Dec 13, 2025

    @kovila77 I did find a workaround. You can force the model to follow your prompt using dynamic prompting: "(at 0-2s): prompt", and so on.

    7-8 seconds is perfect. I was able to do a couple of 10-second ones, but it depends on the seed or maybe the prompt itself. Chain 3 KSamplers, with the first one at 2.5-3.5 CFG for the high model, 1-2 steps; the 2nd KSampler resampling the high model from the first KSampler; and finally the low model on the 3rd KSampler. 2-2-2 steps. Use res4lyf KSamplers, or there is a triple-KSampler node for this in ComfyUI that does all this in one node, but I like the res4lyf ones because of the different sampling methods.
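
    A minimal sketch of that chain, assuming KSamplerAdvanced-style fields; the exact step windows and the CFG 1 on the later stages are an interpretation, not a published workflow:

    ```python
    # Sketch only: High at CFG ~3.0 for the first 2 steps, then a High
    # resample, then Low, 2 steps each (2-2-2). Field names follow
    # ComfyUI's KSamplerAdvanced; values are one reading of the comment.

    chain = [
        {"model": "high", "cfg": 3.0, "start_at_step": 0, "end_at_step": 2,
         "add_noise": "enable",  "return_with_leftover_noise": "enable"},
        {"model": "high", "cfg": 1.0, "start_at_step": 2, "end_at_step": 4,
         "add_noise": "disable", "return_with_leftover_noise": "enable"},
        {"model": "low",  "cfg": 1.0, "start_at_step": 4, "end_at_step": 6,
         "add_noise": "disable", "return_with_leftover_noise": "disable"},
    ]

    for i, stage in enumerate(chain, 1):
        print(f"KSampler {i}: {stage}")
    ```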

    kovila77 · Dec 13, 2025

    @waifusynthlabs As I said above, one long single clip will not make a minute-long video, because there is not enough memory to do so with good resolution and good speed. That's why we need a way to combine multiple clips somehow.

    About the 3 KSamplers: there are too many parameters to change, so without a workflow, screenshot, or video there is no way I will get it right. I already tried to combine multiple KSamplers with no luck.

    Also, when using speed LoRAs (they are merged into this model), I read we need to use a CFG of 1 to prevent quality degradation. So it seems strange that this setup works.

    darksidewalker
    Author
    Dec 14, 2025 · 3 reactions

    @kovila77 The guy in the YT video has many flaws inside the workflow and seems not to understand some basic functionalities.
    #1 WAN 2.2 cannot use clip_vision, so that part is just useless and wrong
    #2 You cannot inject text per second; the CLIP completely translates the text into tensors and feeds the process, there is no way to inject something here
    #3 Overwhelmingly long prompts will not always make the video better
    #4 The qwen-vl node he uses is just a prompt enhancer that analyses the image
    #5 There is more ... but I'll not bother to explain all that here, sorry.


    The SVI LoRA seems to stabilise longer videos at the sacrifice of motion, but you nonetheless need the resources to generate the long videos, which will not happen with WAN 2.2 on consumer hardware.

    - Very much snake oil, imho

    kovila77 · Dec 14, 2025 · 1 reaction

    @darksidewalker I didn't know that clip_vision is useless in WAN 2.2, thanks.

    2. About timecodes in video: I see it's not working (https://www.reddit.com/r/StableDiffusion/comments/1m0midf/wan_vace_t2v_accept_time_with_actions_in_the/), but I thought: what if they did that in SVI? In the end I think they didn't.

    4. All models that I tried for prompt optimization give mediocre or bad prompts (either for T2I or I2V). It takes too much time to fix the prompt; it's easier and better to write it by hand with some text/tag help from an LLM.

    I have another question, if you don't mind. Is it possible to make a LoRA or something from the difference of models? I see Lightx2v uses this for speed-LoRA creation, if I understood correctly. But can we, for example, take WAN 2.1 motion and create a LoRA for LOW WAN 2.2 to control movement?

    Checkpoint
    Wan Video 2.2 I2V-A14B

    Details

    Downloads
    2,928
    Platform
    CivitAI
    Platform Status
    Available
    Created
    12/3/2025
    Updated
    5/12/2026
    Deleted
    -

    Files

    DasiwaWAN22I2V14BTastysinV8_q6High.gguf