A GGUF conversion of this great work https://huggingface.co/Phr00t/WAN2.2-14B-Rapid-AllInOne
A simple JSON workflow (you only need the GGUF loader custom node): https://civarchive.com/api/download/models/2099499?type=Training%20Data
The umt5xxl text encoder and VAE are excluded; you can use the separate ones from Wan 2.1.
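For reference, the loader side of such a workflow might look roughly like this in ComfyUI's API-format JSON (a sketch: node class names come from city96's ComfyUI-GGUF custom node and stock ComfyUI; the file names are illustrative):

```json
{
  "1": {
    "class_type": "UnetLoaderGGUF",
    "inputs": { "unet_name": "WAN2.2-14B-Rapid-AllInOne-Q4_K_M.gguf" }
  },
  "2": {
    "class_type": "CLIPLoaderGGUF",
    "inputs": { "clip_name": "umt5-xxl-encoder-Q8_0.gguf", "type": "wan" }
  },
  "3": {
    "class_type": "VAELoader",
    "inputs": { "vae_name": "wan_2.1_vae.safetensors" }
  }
}
```

The model, clip, and vae outputs of these three nodes then wire into the usual sampler/decode nodes.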
Comments (38)
Clearer than the original AIO version and less noisy. A result much closer to the actual Wan2.2 model. Thank you!
I converted the first base model from the original author's Hugging Face repo; I think that model is better than his newer versions.
I'll believe it only if you share the workflow.
It is indeed FAR less noisy than AiO v2/v3/v4/v5.
But it's about on par with v1, which I'm assuming is the one used to make this GGUF, since it's linked to it.
That said, it's still very nice to have a GGUF version.
kumarkishank959811 I added a link to a simple workflow in the description.
Great gguf version😍 👍🏻
Can you recommend a workflow?
Do you use this model for both the high-noise and low-noise passes?
I use this model once, like a standard Wan 2.1 model.
Is this supposed to be faster than the normal GGUF 2.2 or just take up less space?
This doesn't need quality or speed-up LoRAs (the more LoRAs connected, the more slowdown), because everything is included; you only need 4 steps to generate. I removed umt5xxl and the Wan VAE because most users already have those separately. Author of the original model: https://civitai.com/user/Phr00t_
Is there a Q8 version?
My GPU handles 1280x720 at Q8 quite well.
God bless your soul if you make one xD
Q4_K_M has good quality and takes less space and VRAM; try it first and compare with the original fp8 safetensors model: https://civitai.com/models/1824594/rapid-wan-22-all-in-one?modelVersionId=2064786
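As a rough back-of-envelope size comparison (a sketch: the ~8.5 and ~4.85 bits-per-weight figures are typical llama.cpp averages for Q8_0 and Q4_K_M, and real GGUF files vary with tensor layout and metadata):

```python
def gguf_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough GGUF file size estimate: parameter count times average bits per weight."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# 14B-parameter model at typical average bits-per-weight:
q8 = gguf_size_gb(14, 8.5)    # ~14.9 GB
q4 = gguf_size_gb(14, 4.85)   # ~8.5 GB
print(f"Q8_0 ~ {q8:.1f} GB, Q4_K_M ~ {q4:.1f} GB")
```

So Q4_K_M cuts the file (and the VRAM needed to hold the weights) to a bit more than half of Q8_0.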
DooFY87 Yeah, quality is about on par, but generation is 100 s faster with the same workflow I use. About 240 s now, which is nice xD
What text encoder and CLIP are you using?
text encoder:
https://huggingface.co/city96/umt5-xxl-encoder-gguf/resolve/main/umt5-xxl-encoder-Q8_0.gguf?download=true
A separate CLIP isn't needed.
"Hey! First of all, I just have to say—your WAN 2.2 i2v model completely blew me away 😍🔥 The quality and speed are on another level!
I just have one small request… my device is a bit slow, with only 4 GB VRAM 🥹, so the Q4 version runs a little heavy for me.
If possible, could you please make a Q3 GGUF AIO i2v version? That way, people like me with low VRAM could also enjoy your amazing work and create our dream clips 🙏💖
Your work is already gold, but if this small favor happens, it would make life so much better for fans like me! 🫶
Hi, converting this model from safetensors to GGUF takes a long time on a weak computer. I will do it later, within a week, and upload it to Civitai. Also, I am not the author of the original model; here is a link to his profile: https://civitai.com/user/Phr00t_
I've added a Q3_K_M version now.
DooFY87 Wow!! You’re the best 😍🔥 Thank you so much for making the Q3 GGUF AIO i2v version 🙏💖
Now even my little 4GB VRAM device can handle your magic! Can’t wait to create more with it 🫶✨
Really appreciate the time and effort you put in for the community—this means a lot! ❤️
kumarkishank959811 Wow, 4 GB VRAM. And how much RAM do you have? What's your generation resolution and time for one gen?
flo11ok874 16 GB RAM, 4 GB VRAM, 480x288; generation time for 5 seconds is only 70 s, with all NSFW LoRAs.
kumarkishank959811 That's amazing for only 4 GB VRAM and 16 GB RAM.
It doesn't generate I2V at all; it gives an error: "KSampler
The size of tensor a (48) must match the size of tensor b (16) at non-singleton dimension 1". Stock ComfyUI has no (AiO) nodes for GGUF; in general, no such nodes exist, so for GGUF you definitely need separate CLIP/text encoders and a VAE...
Read the description carefully. umt5xxl and the VAE are excluded from this model; you can use a separate VAE and umt5xxl. I used the umt5xxl GGUF from this author, https://huggingface.co/city96/umt5-xxl-encoder-gguf/resolve/main/umt5-xxl-encoder-Q8_0.gguf?download=true, and the standard Wan 2.1 VAE.
DooFY87 Thanks for the quick reply. I figured it out and everything works. Sorry I didn't understand right away... It would be nice to add to the description that you also need a node to load the text encoder in GGUF format.
DJKayF Anyone can load the standard umt5xxl text encoder in safetensors format; just swap the node for the correct one.
DooFY87 I haven't heard of anything like this. How exactly do I do it? How do I swap the node for the other one?
DJKayF Just replace the GGUF CLIP loader with the standard "Load CLIP" node (it's built into ComfyUI) and set the model type to wan in the node settings. Then use the standard umt5xxl safetensors model.
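In API-format workflow JSON, that swap amounts to something like the following (assuming stock ComfyUI's CLIPLoader node class; the file name is an example of a common umt5xxl safetensors release):

```json
{
  "class_type": "CLIPLoader",
  "inputs": {
    "clip_name": "umt5_xxl_fp8_e4m3fn_scaled.safetensors",
    "type": "wan"
  }
}
```

The node's CLIP output plugs into the same text-encode nodes the GGUF loader fed before.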
Works perfectly. flf2v-node works fine too.
Can you share the FLF2V workflow? I'm struggling to make it work.
Can you share the workflow, please?
@solaiappan I'm not an English user, so I'm not sure whether my workflow file is displayed in English; I hope this file helps you. If it's not displayed in English, I'm a novice and can't help you 😢