This GGUF file is a direct conversion of Wan-AI/Wan2.2-I2V-A14B / https://huggingface.co/bullerwins/Wan2.2-I2V-A14B-GGUF/tree/main
Consider using this with the Lightning LoRAs for a speed-up (just read their descriptions carefully!)
Just the basic WAN 2.2 as GGUF - No support.
YOU are responsible for outputs, as always! If you make ToS-violating content and I become aware of it, I WILL report it.
Disclaimer
These models are shared without warranty and on the condition that they are used in a lawful and responsible way. I do not support or take responsibility for illegal, harmful, or harassing uses. By downloading or using them, you accept that you are solely responsible for how they are used.
Description
FAQ
Comments (22)
Are you planning to add the same for Low I2V? And for both of the T2V models?
Thank you for uploading this, I can't wait to try it!
WARNING: the node flow2-wan-video is preventing i2v workflows from working with WAN 2.2. Use the Manager to delete this node from ComfyUI.
Would this work like the other WAN 2 models on a 12 GB NVIDIA card? I have plenty of system memory, I'm just not sure what that would look like with two 12 GB+ models used like this.
Great, waiting for the Low Noise version :)
I've been getting this one specific error using these models with the basic hi-lo workflow that's going around. Anyone have a clue what might be the culprit?
Given groups=1, weight of size [5120, 36, 1, 2, 2], expected input[1, 32, 12, 144, 78] to have 36 channels, but got 32 channels instead
Hi, is it really necessary to have a low and a high model, or will there be a single-model version like 2.1 Lightspeed with 4 steps?
Which one is good for an RTX 3060 Ti? What is the maximum resolution I can do? Please advise; I've been going in circles on YouTube and CivitAI.
My first Music Video using WAN 2.2 instead of Runway. It's very capable!
Would someone kindly explain the difference between the 14B and A14B models?
WAN 2.2 High and Low | How to add new LoRAs in ComfyUI with Wan 2.2
Why don't you indicate the amount of video memory required in the description? Will it run on 16 GB of VRAM?
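A rough way to ballpark the VRAM question above: quantized GGUF weights are loaded roughly 1:1 from disk, plus extra memory for activations, VAE, and text encoder. This is only a sketch under stated assumptions; the overhead figure below is a guess, not a measurement, and ComfyUI can partially offload to system RAM anyway.

```python
import os

def estimate_min_vram_gb(model_path: str, overhead_gb: float = 3.0) -> float:
    """Rough lower bound on VRAM needed to keep the model fully on the GPU:
    file size of the quantized weights plus a flat overhead for activations,
    VAE, and text encoder. The 3 GB default overhead is an assumption."""
    weights_gb = os.path.getsize(model_path) / 1024**3
    return weights_gb + overhead_gb
```

A Q8_0 quant of a 14B model is roughly 15 GB on disk, so by this estimate 16 GB of VRAM is tight for full GPU residency; with offloading it may still run, just slower. A lower quant (Q5/Q4) trades quality for a smaller footprint.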
Where is "Reference CFG" in SwarmUI? Am I blind? I don't see it anywhere.
I do have one question though: I'm running the WAN 2.2 GGUF 480p model with a Lightning LoRA (4 steps) to speed things up; a 6-second video takes about 60-70 seconds on an NVIDIA H100. The main problem I'm facing is that facial features, especially the eyes, tend to drift or "melt" during generation.
Do you think there’s a good way to stabilize this? I was even thinking about training a dedicated Low LoRA focused on faces/eyes to keep them sharper.
The output video is highly pixelated, any idea what causes that? I am using your workflow wan22A14BHighLowPreset_comfyuiBasic2LoraV11.
I've been trying to download the low-noise version for the last two days, with multiple attempts... Is it just me, or is it stuck? Is there any other source with the exact same low-noise model? Hugging Face?
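If a download keeps stalling, one way to check whether the file you ended up with is complete and identical to a mirror's copy is to compare SHA-256 checksums (Hugging Face displays the hash on each file's page). A minimal sketch, streaming the file so a multi-gigabyte GGUF doesn't need to fit in memory:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 of a file in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

If the hash matches the one published on the source page, the file is byte-identical regardless of which mirror it came from; if not, the download was truncated or corrupted and should be retried.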
Just getting into Wan for the first time, and when I look at example images it seems like they are nearly always made using i2v and not t2v. Is wan t2v just not very good? I have had pretty poor results myself. My question then, is what are people using to generate the initial image used for the i2v?
Can this load on an RTX 4060?
It works, but for the first second I'm getting a clean video; after that the video becomes extremely blurry, then ends.
Sorry to bother you; I used this GGUF, and the video shows very severe ghosting and blurriness. I don't know why this is happening. I am using your workflow 'fastfidelity' and haven't changed anything, just added the UNet loader to load this model. Using 'WAN 2.2 I2V 14B Lightspeed' works fine. Could you please tell me what parameters or settings I need to change to make it work properly?
Is the Low one good for 8 GB VRAM?
RTX 4060 Ti
Can you recommend a workflow for this, please?
Details
Files
wan22I2VA14BGGUF_q8A14BLow.gguf
Mirrors
Wan2.2-I2V-A14B-LowNoise-Q8_0.gguf
wan2.2_i2v_low_noise_14B_Q8_0.gguf
Wan2.2-I2V-LowNoise-14B-Q8_0.gguf
wan22I2VA14BGGUF_a14bLow.gguf