This is a direct GGUF conversion of Qwen/Qwen-Image.
The model files can be used in ComfyUI with the ComfyUI-GGUF custom node. Place the required model files in the corresponding ComfyUI model folders.
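The placement step above can be sketched as a short shell snippet. This is a hedged example, not an official install script: the quant filename `qwen-image-Q4_K_M.gguf` is a placeholder for whichever GGUF file you download, and the `models/text_encoders` folder name follows recent ComfyUI conventions (older installs use `models/clip`); adjust both to your setup.

```shell
# Sketch of where the files go, assuming a standard ComfyUI checkout.
# COMFY and the quant filename below are placeholders; adjust to your setup.
COMFY="${COMFY:-$HOME/ComfyUI}"

# ComfyUI-GGUF loads GGUF unet weights from models/unet; the text encoder
# lives under models/text_encoders (models/clip on older installs).
mkdir -p "$COMFY/models/unet" "$COMFY/models/text_encoders"

# Move the downloaded files into place (skipped if not downloaded yet).
if [ -f qwen-image-Q4_K_M.gguf ]; then
    mv qwen-image-Q4_K_M.gguf "$COMFY/models/unet/"
fi
if [ -f qwen_2.5_vl_7b_fp8_scaled.safetensors ]; then
    mv qwen_2.5_vl_7b_fp8_scaled.safetensors "$COMFY/models/text_encoders/"
fi
```

After restarting ComfyUI, the GGUF model should appear in the "Unet Loader (GGUF)" node's dropdown.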
Comments (5)
Does anyone know if it works on reForge?
It doesn't; you need to use ComfyUI for anything that isn't Flux, SDXL, or SD 1.5.
Workflow?
Workflow? It's there.
Details
Files
qwenImageGGUF_qwenCLIP.safetensors
Mirrors
qwen_2.5_vl_7b_fp8_scaled.safetensors
qwenImage_qwen25Vl7bFp8Scaled.safetensors
qwen2.5-vl-7b-fp8_scaled.safetensors
qwenImageEdit_qwen25Vl7bFp8Scaled.safetensors
text_encoder.safetensors
