FLUX.1-dev-ControlNet-Union-Pro-2.0-fp8
A Good Reference for Parameters
Canny: controlnet_conditioning_scale=0.7, control_guidance_end=0.8
Depth: use depth-anything, controlnet_conditioning_scale=0.8, control_guidance_end=0.8
Pose: use DWPose, controlnet_conditioning_scale=0.9, control_guidance_end=0.65
Gray: use Color, controlnet_conditioning_scale=0.9, control_guidance_end=0.8
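As a convenience, the recommended settings above can be kept in a small lookup table. This is just an illustrative way to organize them; the key names mirror diffusers-style pipeline arguments, and the `settings_for` helper is hypothetical:

```python
# Recommended ControlNet settings from the reference table above.
# Preprocessor names (depth-anything, DWPose, Color) follow the guide.
CONTROL_SETTINGS = {
    "canny": {"controlnet_conditioning_scale": 0.7, "control_guidance_end": 0.8},
    "depth": {"preprocessor": "depth-anything",
              "controlnet_conditioning_scale": 0.8, "control_guidance_end": 0.8},
    "pose":  {"preprocessor": "DWPose",
              "controlnet_conditioning_scale": 0.9, "control_guidance_end": 0.65},
    "gray":  {"preprocessor": "Color",
              "controlnet_conditioning_scale": 0.9, "control_guidance_end": 0.8},
}

def settings_for(control_type: str) -> dict:
    """Return the recommended parameters for a given control type."""
    return CONTROL_SETTINGS[control_type.lower()]
```

You can then splat the dict into whatever pipeline call you use, e.g. `pipe(prompt, control_image=img, **settings_for("canny"))`.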
Folder Structure
Organize your models as follows for FLUX dev and ControlNet workflows:
📁 ComfyUI/
├── 📁 models/
│   ├── 📁 diffusion_models/
│   │   └── flux-dev.safetensors  # (or gguf)
│   ├── 📁 text_encoders/
│   │   ├── clip_l.safetensors
│   │   └── t5xxl_fp8_e4m3fn.safetensors  # (or t5xxl_fp16 or t5xxl_fp8_e4m3fn_scaled)
│   ├── 📁 vae/
│   │   └── ae.safetensors
│   └── 📁 controlnet/
│       └── FLUX.1-dev-ControlNet-Union-Pro-2.0-fp8.safetensors
Note: Only one T5XXL text encoder is needed; choose based on your hardware and quality/speed needs.
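If you are setting up from scratch, the layout above can be created in one shell command (a sketch; adjust the root path to wherever your ComfyUI install lives):

```shell
# Create the ComfyUI model folders used in this guide.
mkdir -p ComfyUI/models/diffusion_models \
         ComfyUI/models/text_encoders \
         ComfyUI/models/vae \
         ComfyUI/models/controlnet
```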
My FP8 Quantization Solution
With modest coding experience, I researched quantization and implemented FP8 compression for the model. The quantized version works perfectly for my needs, enabling all ControlNet workflows with much lower memory requirements and no noticeable quality loss.
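This is not my actual conversion script, but to illustrate what FP8 (E4M3: 1 sign bit, 4 exponent bits, 3 mantissa bits) rounding does to an individual weight, here is a minimal pure-Python simulation:

```python
import math

def quantize_fp8_e4m3(x: float) -> float:
    """Round a float to the nearest value representable in FP8 E4M3
    (1 sign, 4 exponent, 3 mantissa bits, bias 7).
    Simplified sketch: ignores NaN handling and saturation at E4M3's max of 448."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    ax = abs(x)
    e = math.floor(math.log2(ax))
    e = max(min(e, 8), -6)           # clamp exponent to E4M3's normal range
    m = round(ax / 2**e * 8) / 8     # 3 mantissa bits -> mantissa steps of 1/8
    return sign * m * 2**e
```

For example, a weight of 0.3 lands on 0.3125, the nearest representable value at that exponent. In practice the whole tensor is cast at once (e.g. with PyTorch's `torch.float8_e4m3fn` dtype), but the per-value effect is the same.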
Using The Quantized Model
Supports all original control types: pose, depth, canny edge, etc.
Drop any reference image, select control type, and generate results with lower memory usage.
Enhanced Prompting with OllamaGemini
I use my custom OllamaGemini node for ComfyUI to generate optimal prompts. Combined with the quantized model, this creates a powerful, memory-efficient pipeline for creative image manipulation.
Alternatives for High-End Hardware
If you have a powerful GPU, the original unquantized model from Shakker-Labs offers higher fidelity at the cost of increased memory usage.
Looking Forward
I welcome community feedback! If you find these workflows helpful, please show your support with a like on the project. I'm open to opportunities and appreciate encouragement as I develop these resources.
Feel free to experiment with the model for your creative projects, whether using the memory-efficient quantized version or the original full-precision implementation!
👨‍💻 Developer Information
This guide was created by Abdallah Al-Swaiti.
For additional tools and updates, check out my other repositories.
Comments (18)
works great! thanks a lot :)
I've tried your quant and it's great to have it as an option. It works great... but comparing it to the old Union Pro (non-quantized), the old one almost always performs better. I have not yet tried this full fp16 2.0 one to compare. Anyway, thanks!
In my opinion this one is better, and it gives the same results as the original fp16. Hope you enjoy it as much as I do, thanks!
@AbdallahAlswa80 I will try it more, maybe it was just some bad examples or seeds.
I can drag the images from here into ComfyUI and they load and I am able to run the workflow, but if I try to save the workflow, and reopen, it does not open. I tried it multiple times, and the same thing happens. Also, I noticed there were different workflows for different images. I was not sure if that was on purpose or not.
Try exporting instead; also, I'm working on the latest version of ComfyUI.
@AbdallahAlswa80 Yeah, I am using the nightly version of ComfyUI with mostly updated custom_nodes. I will try to export, I had not thought of that. I ran the file I saved in ComfyUI through jq in case there was something in the semantics, but it did not help, as I still could not open the workflow. I have never come across this weirdness with ComfyUI before.
Does it work on Forge?
sure
@AbdallahAlswa80 How do you get it to work in Forge? I get the canny lines, for example, but the render doesn't take them into account, it just ignores them.
How to use it in Forge UI? I have put the file in the ControlNet folder, but the ControlNet does not appear in the Forge interface.
For Forge it's simple: just put the model inside the controlnet folder :) and then use it in the integrated ControlNet module in Forge :)
@AbdallahAlswa80 After placing the model in the controlnet folder, I enabled it in Forge UI, but the generated results don't look like ControlNet is working.
@EKKIVOK No, it is being seen: the box is checked and it shows as "working", but it doesn't respect the canny at all.
https://huggingface.co/ABDALLALSWAITI/FLUX.1-dev-ControlNet-Union-Pro-2.0-fp8 The option to recolor persons is described here, does this require the same workflow?
Reminder to anyone using this: it is a ControlNet, not a checkpoint. Put it in the controlnet folder and it works well even with Schnell.
Thanks, solved my out of RAM problem
OMG. if you want to make this work, you have to abuse the SetUnionControlNetType node's type value as described in https://civitai.com/models/709352/flux-union-controlnet-pro-workflow :
canny - "openpose"
tile - "depth"
depth - "hed/pidi/scribble/ted"
blur - "canny/lineart/anime_lineart/mlsd"
pose - "normal"
gray - "segment"
low quality - "tile"
It also doesn't work at all with Flux Lite (which I tried because this controlnet takes a large chunk of my VRAM).