I'm offering you a first version of my workflow adapted to WAN2.2. There's still a lot to improve, so don't hesitate to give me feedback.
Resources you need:
📂 Files:
For the base version
T2V Model: wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors and wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors
in models/diffusion_models
For the GGUF version
T2V Quant Model: wan2.2_t2v_high_noise_14B_QX.gguf and wan2.2_t2v_low_noise_14B_QX.gguf
in models/unet
For the lightx2v version
lightx2v LoRA: Wan21_T2V_14B_lightx2v_cfg_step_distill_lora_rank64.safetensors
CLIP: umt5_xxl_fp8_e4m3fn_scaled.safetensors
in models/clip
VAE: wan_2.1_vae.safetensors
in models/vae
Any upscale model:
Realistic: RealESRGAN_x4plus.pth
Anime: RealESRGAN_x4plus_anime_6B.pth
in models/upscale_models
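The folder paths above can be created in one go before you move the downloads in. This is a minimal sketch, assuming a standard ComfyUI install; COMFY_ROOT is a placeholder for your ComfyUI directory, not something the workflow defines.

```shell
# Create ComfyUI's model folders named in the list above.
# COMFY_ROOT is a placeholder: point it at your ComfyUI install.
COMFY_ROOT="${COMFY_ROOT:-$PWD}"
mkdir -p "$COMFY_ROOT/models/diffusion_models" \
         "$COMFY_ROOT/models/unet" \
         "$COMFY_ROOT/models/clip" \
         "$COMFY_ROOT/models/vae" \
         "$COMFY_ROOT/models/upscale_models"
# Then move each download into its folder, e.g.:
# mv wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors "$COMFY_ROOT/models/diffusion_models/"
```

After restarting ComfyUI (or refreshing the node list), the files should appear in the corresponding loader dropdowns.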
📦 Custom Nodes:

Description
New frame rate slider,
Speed LoRA correction.
FAQ
Comments (9)
There's an issue with the T2V workflow (v1.1): the "Frame rate" node's value is not connected to the "calculFrames" node.
Thanks, probably a bad copy from the GGUF version; fixed now.
After testing, I found that the video quality from Kijai's nodes and models is better (referring to the default workflow in the Kijai repository). However, I really like the features of your workflow. Could you create a version of it using Kijai nodes?
I've replicated a simplified Kijai version of the UmeAiRT workflow.
https://civitai.com/images/92147686
Something is mixed up in your workflow: TXT to VIDEO (gguf).json is actually the first frame/last frame workflow.
Ayy, I think the wrong file was uploaded, at least for the base GGUF one.
Maybe ComfyUI overwrote my files; I'll fix it when I'm back home.