A GGUF conversion of this great work https://huggingface.co/Phr00t/WAN2.2-14B-Rapid-AllInOne
Simple JSON workflow (you only need the GGUF loader custom node): https://civarchive.com/api/download/models/2099499?type=Training%20Data
umt5xxl and the VAE are excluded; you can use the separate ones from Wan 2.1.
Comments (60)
My updated ComfyUI doesn't have sa_solver. Where do I get it? Googling hasn't helped, I'm lost.
Will you be converting the NSFW versions of Phr00t's models? Would be great to have!
Many GGUFs (T2V too) here - https://huggingface.co/befox/WAN2.2-14B-Rapid-AllInOne-GGUF/tree/main
Are these the NSFW versions of the models?
NSFW models, T2V models, and many quant versions like Q2 or Q6 are being made here (he's still adding more and more versions) - https://huggingface.co/befox/WAN2.2-14B-Rapid-AllInOne-GGUF/tree/main
@flo11ok874 Thanks!
This doesn't have the two separate High and Low pieces?
Yea, that's the point.
Which vae and text encoder should I use?
Standard VAE: the Wan 2.1 VAE -> https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/vae/wan_2.1_vae.safetensors?download=true and a text encoder of your choice, for example https://huggingface.co/city96/umt5-xxl-encoder-gguf/tree/main -> umt5-xxl-encoder-Q4_K_M.gguf
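If it helps, here's a minimal Python sketch of where these files usually go in a default ComfyUI install. The folder names follow the common ComfyUI layout (the ComfyUI-GGUF custom node reads GGUF UNet files from `models/unet`); the GGUF filename is illustrative, not the exact file name:

```python
from pathlib import Path

# Adjust this root to wherever your ComfyUI install lives.
comfy = Path("ComfyUI/models")

targets = {
    "unet": "wan2.2-rapid-aio.gguf",         # the GGUF from this page (illustrative name)
    "vae": "wan_2.1_vae.safetensors",        # from the Comfy-Org repackage linked above
    "clip": "umt5-xxl-encoder-Q4_K_M.gguf",  # from city96's encoder repo linked above
}

for folder, filename in targets.items():
    dest = comfy / folder
    dest.mkdir(parents=True, exist_ok=True)  # create the folder if it's missing
    print(f"place {filename} in {dest}")
```

Then point the GGUF Unet Loader, Load VAE, and Load CLIP nodes at those three files respectively.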
@flo11ok874 I'm hitting OOM every time, even though all the files together are smaller than my total RAM + VRAM (28GB/12GB).
@MacumBX13 Probably too high a resolution, length, or frame count. Start with 480x720, 81 frames, 16fps.
plz add mega to gguf
Mega already in development here - https://huggingface.co/befox/WAN2.2-14B-Rapid-AllInOne-GGUF/tree/main
@flo11ok874 thx, you are so nice!
Thank you for your efforts, Please add MEGA also (GGUF)
Mega already in development - https://huggingface.co/befox/WAN2.2-14B-Rapid-AllInOne-GGUF/tree/main
Thank you very much!
What are the recommended CFG and steps?
Only what the JSON workflow says.
Bro, can you tell me what's inside the Mega version GGUF?
Mega is Vace
Hey, thanks for the GGUF models, keep up the good work. But I have a question. This is my first time using RapidAIO models. I just downloaded Mega-v1 Q6, and I'm only getting random video, nothing close to my given image. So is the model T2V only? And is it NSFW or SFW? If SFW, can it still generate NSFW or not?
You have to use a vace workflow even to do typical i2v with the mega model. The example is on the original creator's huggingface page.
The same happens for me... my input picture gets ignored... I use this workflow. Can anybody please help me? Phr00t/WAN2.2-14B-Rapid-AllInOne at main
When I make a video in ComfyUI and write a prompt like "a girl gets up and leaves the frame behind the camera", the workflow does the following: the girl gets up, makes some movements, and comes back to the initial frame, i.e. sits down again and does not go off-screen. The question is, why doesn't the workflow follow the prompt? Is some node missing, or is the model itself just trained that way?
You probably have first-frame and last-frame inputs, i.e. an FLF workflow setup. Using a workflow that only starts from the initial frame should fix this.
After testing, nah... MEGA is mega garbage for decent I2V. It's lost its NSFW adherence (yes, I used the proper workflow), and my characters barely move. Using the v10 (both versions) produces amazing fluid results and it really makes images come to life. MEGA is only good for T2V.
@rivdemon1221554 100%, v10 produces amazing motion. I appreciate your insight, thank you for posting!
How can I get the custom node to load the all-in-one model? All GGUF loaders I find for Comfy have no outputs for VAE & CLIP.
Why don't you use the workflow from the description? VAE and CLIP have separate nodes: Load VAE and Load CLIP (text encoder).
You need to load them separately. Just use the standard Load VAE and Load CLIP nodes.
:c Why do my image-to-video generations always come out censored?
Add LoRAs, otherwise everything comes out really weird. Even so it's strange, because Wan has no censorship by itself. I can help you if you want.
Is there any recommended download link for umt5xxl and vae?
both are in the model already
Is it possible to have a workflow that includes any extender and lora for this? I think I have a rough idea how to add the multiple lora, but not the extender.
Just came here to ask the same question. Is it possible to use this model with any LoRA at all? How do I implement the workflow?
In case anyone here is using Wan2GP you can find the fork here with support for this model and its NSFW sibling https://github.com/Gunther-Schulz/Wan2GP/tree/megaonly
How do I install this model to this forked Wan2GP? It does not come up in the default models list.
I tried version 9, it works well, now I will try version 10.
This model doesn't have separate low and high versions, just a single model?
Only 1 model, 1 sampler.
@flo11ok874 ah i see.. thanks for info
Use umt5_fp16 rather than fp8 or you'll receive an error.
Can I use it for both high and low noise? I don't see separated versions listed
Yes, the AIO model includes high and low. Follow the links and read the model page, it includes info on how to setup a workflow (with pictures).
@PassiveImp No, low only
Hi! Thanks for great job. Could you spend time to add Q2 version for 8Gb VRAM ? Please!
You should be able to fit Q3 or even Q4 in 8GB. At least in Comfy, the program automatically offloads parts of the model to RAM.
You still need enough RAM for this to work. I would say at the very least 16GB with an extremely optimized version of Windows 10/11. Or just use Linux.
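For picking a quant that fits, a rough back-of-the-envelope estimate helps. This sketch uses approximate average bits-per-weight figures for llama.cpp-style quants (the exact numbers vary by model and are an assumption here, not measured from these files):

```python
# Approximate average bits-per-weight for common GGUF quant types (assumption).
BPW = {"Q3_K_M": 3.9, "Q4_K_M": 4.85, "Q6_K": 6.6, "Q8_0": 8.5}

def est_gb(params: float, bpw: float) -> float:
    """Approximate file size in GB (1 GB = 1e9 bytes)."""
    return params * bpw / 8 / 1e9

# Rough sizes for a 14B-parameter model:
for quant, bpw in BPW.items():
    print(f"{quant}: ~{est_gb(14e9, bpw):.1f} GB")
```

So a Q4_K_M of a 14B model lands around 8-9 GB on disk; with offloading, part of that sits in RAM rather than VRAM, which is why total RAM still matters.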
So this only takes Wan2.2 Low loras? Doesn't that make this Wan 2.1, which also takes Low loras?
Is there any way to generate videos with first and last frame? i tried using workflows for the Phr00t rapid aio but they don't work with the gguf models.
Good day. Is it possible to make the original video longer than 2 seconds?
Just change the length.
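As a quick sketch of how length maps to seconds: Wan-style video models typically expect frame counts of the form 4n+1 (the default 81 frames at 16 fps is about 5 seconds). Assuming that convention, a small helper can pick a valid frame count for a target duration:

```python
def frames_for(seconds: float, fps: int = 16) -> int:
    """Nearest valid Wan-style frame count (4n+1) for a target duration."""
    raw = round(seconds * fps)
    n = max(0, round((raw - 1) / 4))
    return 4 * n + 1

print(frames_for(2))  # 33 frames, about 2 s at 16 fps
print(frames_for(5))  # 81 frames, about 5 s at 16 fps
```

Longer lengths cost more VRAM roughly in proportion to the frame count, so bump it gradually.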
@cyberyisus Thank you!
Hello. It gives a fuzzy image.
What is the benefit to using this gguf over the wan model its based on?
It fits the model on lower-VRAM cards.
It seems it is not compatible with the newly released VBVR LoRA. I tried it, but the human movements were robotic; they even glided along the ground instead of walking. lol.