Condensed/subgraphed workflow for Qwen Image Edit 2511!
IMPORTANT!!!
The model, text encoder, and VAE are all subgraphed into one node to keep the workflow accessible and condensed. YOU MUST SELECT YOUR MODEL FILES IN THE SUBGRAPH OR THE WORKFLOW WILL NOT RUN!!!
FOR INPUT IMAGES ABOVE 1024x1024, use the ImageScaleToTotalPixels node under the image input nodes to resize the image to 1 megapixel. If your image is too big, the model cannot properly generate your edit: it treats the input image as a dedicated layer and places it in the center of your generation.
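The 1-megapixel rescale can be sketched as plain math (a hypothetical helper, not the actual node code; I'm assuming a 1024x1024 = 1,048,576 pixel budget and aspect-ratio-preserving scaling, which is how ImageScaleToTotalPixels behaves in my experience):

```python
def scale_to_total_pixels(width: int, height: int, total_pixels: int = 1024 * 1024):
    """Compute new dimensions so width * height ~= total_pixels,
    preserving the aspect ratio (same idea as ImageScaleToTotalPixels at 1.0 MP)."""
    scale = (total_pixels / (width * height)) ** 0.5
    return max(1, round(width * scale)), max(1, round(height * scale))

# A 2048x2048 input is 4x the pixel budget, so it comes back at 1024x1024.
print(scale_to_total_pixels(2048, 2048))
```

Anything already at or under ~1 MP comes back essentially unchanged, so it's safe to leave the node enabled for smaller inputs too.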
The LoRA speeds up generation time:
with LoRA: 4-8 steps (8 for quality)
without LoRA: 20-40 steps (40 for quality)
GGUF Model
https://huggingface.co/unsloth/Qwen-Image-Edit-2511-GGUF/tree/main
Text Encoder
8-Step LoRA
https://huggingface.co/lightx2v/Qwen-Image-Edit-2511-Lightning/tree/main
VAE
(The 2511 VAE must not be Comfy-native yet, because I got a ton of errors when trying to use it, so I swapped it out for the previous Qwen Image VAE, which seems to work great!)
Description
v2 brings in the native Comfy nodes for the workflow. Still subgraphed and condensed.
Comments (7)
Subjects are still squished. It does not know how to draw tall thin people with small waists, even if you provide a reference image. This is a recurring problem with all versions of Qwen and there still aren't any LoRAs for that. Stick with Flux UMO/Flux.2; either will still require prompting (absurdly tall, absurdly small waist, absurdly long hair) to avoid reversion to the mean.
Overall 2509 and 2511 are far inferior to the original Image and Edit.
It's also really slow: 4x the generation time of Flux.2, now that Flux.2 has its own 4-step LoRA.
I highly agree, couldn't have said it better. Honestly, I just posted this for the people who need it; I NEVER use Qwen. EVER lol
@realrebelai Originals are excellent at inpainting NSFW with LoRAs, Nunchaku provides great speeds. All versions are poor at generating characters though.
I do believe they fixed this model with the new workflow I just uploaded; it's the native Comfy workflow. It corrects the saturation and texture issues present in the last one, and the 4-step LoRA now works as intended. Little to no degradation! Test it out.
Does the GGUF Q4 work on an 8GB VRAM setup?
For that, get a Q2 from https://huggingface.co/unsloth/Qwen-Image-Edit-2511-GGUF/tree/main
Using Q4 with 8GB VRAM (RTX 3070 Ti mobile) and 16GB RAM, it works.