This is a direct conversion of the WAN 2.2 FP16 base checkpoint to NVFP4, using https://github.com/tritant/ComfyUI_Kitchen_nvfp4_Converter
About a 2x speedup on Blackwell GPUs. No speedup on older-generation GPUs, but still usable. With many connected LoRAs, this checkpoint is faster than Q4 GGUFs.
LATEST COMFYUI AND A TORCH BUILD WITH CUDA 13.x SUPPORT REQUIRED. ON OLDER CUDA VERSIONS THIS MODEL IS SLOWER THAN GGUFs AND MAY DEGRADE QUALITY OR GENERATE BLACK OUTPUTS.
Description
This is a direct conversion of the WAN 2.2 FP16 base checkpoint (https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/blob/main/split_files/diffusion_models/wan2.2_i2v_low_noise_14B_fp16.safetensors) to NVFP4 using https://github.com/tritant/ComfyUI_Kitchen_nvfp4_Converter
About a 2x speedup on Blackwell GPUs. No speedup on older-generation GPUs, but still usable. With many connected LoRAs, this checkpoint is faster than Q4 GGUFs.
(LATEST COMFYUI AND A TORCH BUILD WITH CUDA 13.x SUPPORT HIGHLY RECOMMENDED; ON OLDER CUDA VERSIONS THIS MODEL IS SLOWER THAN GGUFs)
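If you want to check your environment before downloading, the requirements above can be sketched as a small helper. This is an illustrative sketch, not part of the converter: the function name `nvfp4_ready` and the thresholds (CUDA 13.x torch build, Blackwell compute capability >= 10.0) are assumptions based on the notes above.

```python
def nvfp4_ready(cuda_version: str, capability: tuple) -> bool:
    """Return True if the NVFP4 fast path should be available.

    Assumptions (illustrative): a torch build compiled against CUDA 13.x
    is required, and Blackwell GPUs report compute capability >= 10.0.
    """
    cuda_major = int(cuda_version.split(".")[0])  # e.g. "13.0" -> 13
    cc_major, _cc_minor = capability              # e.g. (12, 0)
    return cuda_major >= 13 and cc_major >= 10
```

On a machine with PyTorch installed you could call it as `nvfp4_ready(torch.version.cuda, torch.cuda.get_device_capability(0))`; a `False` result means the checkpoint will still load, but without the NVFP4 speedup (and on old CUDA versions you may see black outputs).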