ComfyUI workflow for the Union Controlnet Pro from InstantX / Shakker Labs.
Available modes: Depth / Pose / Canny / Tile / Blur / Grayscale / Low quality
Instructions:
Update ComfyUI to the latest version
Download the model to models/controlnet
Load this workflow
Select the correct mode from the SetUnionControlNetType node (above the controlnet loader)
Important: currently you need to use this exact mapping to work with the new Union model:
canny - "openpose"
tile - "depth"
depth - "hed/pidi/scribble/ted"
blur - "canny/lineart/anime_lineart/mlsd"
pose - "normal"
gray - "segment"
low quality - "tile"
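If you are driving the workflow programmatically, the mapping above can be kept in one place so you don't have to remember the mislabeled dropdown entries. A minimal sketch (the label strings mirror the dropdown options listed above; the helper function name is my own):

```python
# Workaround mapping: the mode you actually want -> the label you must
# select in the SetUnionControlNetType node (per the table above).
MODE_TO_LABEL = {
    "canny": "openpose",
    "tile": "depth",
    "depth": "hed/pidi/scribble/ted",
    "blur": "canny/lineart/anime_lineart/mlsd",
    "pose": "normal",
    "gray": "segment",
    "low quality": "tile",
}

def label_for_mode(mode: str) -> str:
    """Return the dropdown label to select for the desired ControlNet mode."""
    try:
        return MODE_TO_LABEL[mode.lower()]
    except KeyError:
        raise ValueError(f"Unknown mode: {mode!r}") from None

print(label_for_mode("depth"))  # hed/pidi/scribble/ted
```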
So, for example, to use the depth mode you would choose "hed/pidi/scribble/ted" in the SetUnionControlNetType node.

Adjust the strength and end_percent in the ControlNetApply node to find a suitable amount of influence. The sample values in the workflow (0.4 and 0.6) are quite gentle; you may want to increase them for a stronger influence on the image. The model developer recommends a range of 0.3-0.8 for strength.
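If you batch-run the workflow through the ComfyUI API, you can patch these two values in the exported API-format workflow dict instead of editing the graph by hand. A sketch under the assumption that the apply node's class_type starts with "ControlNetApply" and exposes "strength" and "end_percent" inputs (true for the advanced apply node in API-format JSON); the clamping to the developer's 0.3-0.8 range is my own addition:

```python
RECOMMENDED = (0.3, 0.8)  # developer's recommended strength range

def set_controlnet_influence(workflow: dict, strength: float, end_percent: float) -> dict:
    """Patch ControlNet apply nodes in an API-format workflow dict,
    clamping strength into the recommended range."""
    strength = max(RECOMMENDED[0], min(RECOMMENDED[1], strength))
    for node in workflow.values():
        if node.get("class_type", "").startswith("ControlNetApply"):
            node["inputs"]["strength"] = strength
            node["inputs"]["end_percent"] = end_percent
    return workflow

# Example: bump the gentle defaults (0.4 / 0.6) up a bit.
wf = {"7": {"class_type": "ControlNetApplyAdvanced",
            "inputs": {"strength": 0.4, "end_percent": 0.6}}}
set_controlnet_influence(wf, strength=0.7, end_percent=0.8)
```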
The archive also contains the openpose image used to generate the example image and the original photo the pose was extracted from.
Optional: if you want to extract input information from your own images, install the "ControlNet Auxiliary Preprocessors" custom node.
For openpose you can then enable the openpose extractor node (ctrl+b or right click -> Bypass).
Auxiliary preprocessors also include extractors for other modes such as depth and canny which you can put in place of the openpose image (remember to set the correct mode afterwards).
Comments (32)
Thanks! I checked it out, works great, would be great to add Lora.
You should be able to add a LoRA simply by adding a "Load LoRA" node after the checkpoint loader and re-routing the "model" and "CLIP" noodles to go through the "Load LoRA" node.
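The same rerouting can be sketched for an API-format workflow dict: insert a LoraLoader node after the checkpoint loader and point every consumer of the checkpoint's MODEL (output 0) and CLIP (output 1) at the new node. The helper name and the all-numeric node IDs are assumptions; LoraLoader's input names are the standard ones:

```python
def insert_lora(workflow: dict, ckpt_id: str, lora_name: str,
                strength: float = 1.0) -> dict:
    """Insert a LoraLoader after the checkpoint loader and reroute
    MODEL/CLIP connections through it (API-format workflow dict with
    numeric string node IDs)."""
    lora_id = str(max(int(k) for k in workflow) + 1)
    # Reroute consumers of the checkpoint's MODEL (0) and CLIP (1) outputs.
    for node in workflow.values():
        for key, val in node.get("inputs", {}).items():
            if isinstance(val, list) and val[0] == ckpt_id and val[1] in (0, 1):
                node["inputs"][key] = [lora_id, val[1]]
    workflow[lora_id] = {
        "class_type": "LoraLoader",
        "inputs": {
            "model": [ckpt_id, 0],
            "clip": [ckpt_id, 1],
            "lora_name": lora_name,
            "strength_model": strength,
            "strength_clip": strength,
        },
    }
    return workflow

# Example: sampler takes MODEL, text encoder takes CLIP, both from node "1".
wf = {"1": {"class_type": "CheckpointLoaderSimple", "inputs": {}},
      "2": {"class_type": "KSampler", "inputs": {"model": ["1", 0]}},
      "3": {"class_type": "CLIPTextEncode", "inputs": {"clip": ["1", 1]}}}
insert_lora(wf, "1", "my_lora.safetensors")
```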
May I ask how to install this controlnet?
Sorry, but the instructions are complete nonsense, hard to follow:
canny - "openpose"
tile - "depth"
depth - "hed/pidi/scribble/ted"
blur - "canny/lineart/anime_lineart/mlsd"
pose - "normal"
gray - "segment"
low quality - "tile"
...what!? Like, you need to use the Openpose preprocessor for Canny mode? I don't get it.
This mapping is needed until ComfyUI updates the SetUnionControlNetType node to support Flux Union. So the workaround I described is a bit experimental and technical to follow at this point. Sorry, but the situation is what it is at the moment.
If you find it too confusing, you can use my older loader node that includes simple-to-follow labels, using the alpha version of the InstantX union controlnet, with a bit lower quality: https://www.reddit.com/r/StableDiffusion/comments/1euz2a9/comment/lio2fte/
@eesahe Honestly, everything gets confusing with Flux. The LoRA training is different, the "official" LoRAs and controlnets are crap, some controlnets work, some don't... Just one day after I wrote this, I noticed a mention on your GitHub saying your node is now deprecated because ComfyUI supports the InstantX controlnet... So I updated, then in some cases your node gave me better results, so I kept both methods, and now it doesn't work anymore. I'm so lost haha
good job
Can you imagine how complex it is to use a simple ControlNet model on ComfyUI? ...There it is.
please tell me what is "new" controlnet model, thx
I'm getting this error when loading the controlnet, anyone had any luck? MMDiT.__init__() got an unexpected keyword argument 'image_mode
Replace the controlnet loader with "InstantX Flux Union ControlNet Loader". I had the same error, and after I replaced the node it worked.
https://www.youtube.com/watch?v=DVZikxmIbo0
@maxdreams Hello, I don't recommend using my "InstantX Flux Union Controlnet Loader" node anymore, since it does not fully support the latest versions of the union models (there will be artifacts).
I believe this described error message may instead be resolved by updating ComfyUI to the latest version.
Fantastic! First time I've been able to run ControlNet with Flux! Well done!
I got the error "samplercustomadvanced, Allocation on device". Does anyone have experience with this? How do I fix it?
Why does it seem not to work? Should we use the raw image and choose normal mode, or pass the processed openpose image and choose openpose mode? I've tried both, but both failed.
"Working" without errors, but is really bad. Doesn't follow the controlnets at all :/
I can't see the previews... and that kills me... especially considering the new naming... which is absurd, btw.
This works beautifully! Thank you very much!!
To everyone saying otherwise: make sure you have installed the required nodes, models, and CLIPs that it indicates upon first loading.
I can get depth, hed, and canny to work, but pose doesn't work for me. I'm using a GGUF model, is that a problem?
Hi! Did you find a solution? I want to try GGUF Q4 with controlnet, and I'm not so good with ComfyUI lol
I'm trying with Q5 and the FP8 version, and it seems to completely ignore the openpose image.
No matter what I do, it simply doesn't reproduce the pose from the image.
Works great, but the workflow in the zip file is set up incorrectly. With the control pose output group bypassed, the openpose image is not passed to the decoder, so you just get a copy of the sample image. If you change the input image to the openpose output, you get the correct generated image.
...it seems that for every controlnet type we need a corresponding controlnet_aux preprocessor. If so, the "optional: extract openpose from image" group is not optional but necessary if we use our own pose pictures, and it only works with the type "normal", which is in fact the type "pose" :-)
Cheers for that. I was struggling to get the Union model to work. Also, Kijai has added a node to select the correct union type without the workaround. https://github.com/kijai/ComfyUI-KJNodes Set Shakker Labs Union Control Net Type node
Does anyone have this workflow with inpainting?
#AwesomeSauce
can't get past this and have attempted all variations of clips Prompt outputs failed validation CLIPVisionLoader: - Value not in list: clip_name: 'sigclip_vision_patch14_384.safetensors' not in ['CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors', 'CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors', 'clip-vit-large-patch14-336.bin', 'clip-vit-large-patch14.bin', 'clip_h.pth', 'clip_vision_g.safetensors', 'google--siglip-so400m-patch14-384\\model.safetensors']
Openpose failed. I think it only appears to work because of the prompt that you give.
Openpose is OK (I guess, I didn't try) as long as you have one pose in it. If you are thinking of making character sheets with it, just forget it, because it produces garbage (nothing against OP, it's the controlnet itself). SD 1.5 is still the only one that can produce a whole character sheet from a pose image with multiple poses. Kind of mind-boggling that so many years later there's still no decent controlnet for anything other than 1.5.
Hello, it's been some time but did you find any better way to create character sheets?
Canny, Depth, Pose, and Gray seem to work fine, but I can't get Tile, Blur, or Low Quality to work, which makes this worthless for upscaling. Also, don't bother using Flux Lite with this; it doesn't work.

