NEW 2vXpSwA7: anytest-v4 | openpose-v2_1
abovzv: segment
bdsqlsz: canny | depth | lineart-anime | mlsdv2 | normal | normal-dsine | openpose | recolor | segment | segmentv2 | sketch | softedge | t2i-color-shuffle | tile-anime-α | tile-anime-β | tile-real
BRIA AI: bg-gen | canny | colorgrid | depth | NEW fill | openpose | recolor
CVL-Heidelberg: canny | depth
destitech: inpaint | inpaintv2
diffusers: canny small | mid | full | depth small | mid | full | zoe
EcomXL: inpaint | softedge
Eugeoter: NEW noobai canny | depth | lineart-anime | lineart-real | mangaline | normal | scribble-pidi | scribble-hed | softedge-hed | tile | sdxl anime-canny | vidit-depth
h94: ip-adapter | ipa-vith | ipa+ | ipa+face
Hetaneko: pony canny | cannyv2 | color | depth | replicate | replicatev2
HighCWu: canny-v3
huchenlei: PuLID
Kataragi: canny | flatline | flatline-lora | inpaint | NEW ipa | line2color | line2color-lora | lineart | lineart-lora | NEW noob-ipa | recolor | recolor-lora | NEW rough-coating | tori29-blur | xdog-sketch
kohya-ss: real blur | canny | depth | anime blur | blur-beta | canny | depth | openpose | openposev2 | replicate | replicatev2 | scribble
PromeAI: lineart
ShermanG: lineart
Stability.ai: canny | depth | recolor | revision | sketch
SargeZT: depth-16b-zoe | depth-faid-vidit | depth-zeed | depth-zoe | softedge | t2i-adapters binary | canny | color | depth | segmentation | softedge
TencentARC: canny | depth-midas | depth-zoe | lineart | openpose | recolor | sketch
TheMistoAI: mistoline | mistoline-lora
thibaud: openpose | openpose-lora
ttplanet: tile-real | tile-realv2
NEW windsingai: pose | tile | tile-10w
xinsir: canny | cannyv2 | depth | openpose | openpose-twins | scribble | scribble-anime | tile | union | union-promax
ControlNetXL (CNXL) - A collection of Controlnet models for SDXL
(13.01.2025 - First NoobAI controlnets uploaded by Eugeoter)
(12.01.2025 - First Illustrious controlnets uploaded: windsingai-pose & -tile)
This collection strives to be a convenient download location for all currently available ControlNet models for SDXL. Please do read the version info for model-specific instructions and further resources. If you are the author of one of these models and don't want it to appear here, please contact me to sort this out. The naming scheme of the files follows lllyasviel's from here: https://huggingface.co/lllyasviel/sd_control_collection/tree/main.
CAUTION: The variants of the ControlNet models are marked as checkpoints only to make it possible to upload them all under one version; otherwise the already huge list would be even bigger. If you use download helpers, the correct target folders are extensions/sd-webui-controlnet/models for Automatic1111 and models/controlnet for Forge/ComfyUI.
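The target folders above can be prepared in advance so a download helper (or a manual move) drops the files in the right place. A minimal sketch, assuming default install directory names (adjust the base paths to your own setup; the example filename is hypothetical):

```shell
# Target folders mentioned above (base directory names are assumptions):
A1111_MODELS="stable-diffusion-webui/extensions/sd-webui-controlnet/models"
COMFY_MODELS="ComfyUI/models/controlnet"

# Create the folders if they don't exist yet
mkdir -p "$A1111_MODELS" "$COMFY_MODELS"

# After downloading a model file, move it into place, e.g.:
# mv some-controlnet-sdxl.safetensors "$COMFY_MODELS/"
```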
Requirements for Automatic1111: at least version 1.5.0, though you should preferably upgrade to the latest 1.6.0 release, plus the sd-webui-controlnet extension v0.400+. Bugs or weird behaviour might occur. If you encounter any irregularities, you can join us on our Discord and ask for support, or get in contact with the developers of Automatic1111/ControlNet via GitHub/Hugging Face.
A huge thanks to all the authors, devs and contributors, including but not limited to: abovzv, bdsqlsz, BRIA AI, CVL-Heidelberg, destitech, the diffusers team, Eugeoter, h94, Hetaneko, HighCWu, huchenlei, lllyasviel, Kataragi, kohya-ss, Mikubill, PromeAI, SargeZT, ShermanG, Stability.ai, TencentARC, TheMistoAI, thibaud, ttplanet, windsingai and xinsir.
Description
Inpainting model released by Kataragi on huggingface
https://huggingface.co/kataragi/controlnetXL_inpaint_test
Comments (16)
Did we finally get a good openpose for sdxl??
Maybe? Please give us some feedback if you test it :)
And, no, I have no idea what the twins model is for or what the difference is. Perhaps it was specifically trained to generate pairs of people? If the author responds to my question, you'll find their answer here: https://huggingface.co/xinsir/controlnet-openpose-sdxl-1.0/discussions/1
@eurotaku Yes I was trying to find out about that as well lol
As far as I can tell, the answer is NO. The performance is inconsistent. If you turn up the strength (~1.3) and set the ending step a bit early (0.5-0.7), it works OK for simple poses (using the full OpenPose preprocessor won't work), but it also tends to bork the image quality, making everything blurrier and more plastic. I think the OpenPoseXL2 model still gives better results (note I said better, not great). And if you use DW Full with OpenXL2, you tend to get MUCH better results than with xinsir. Of course, SD1.5 OpenPose remains the GOAT.
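For anyone reproducing these settings outside A1111: the "strength" and "ending step" knobs correspond to the `controlnet_conditioning_scale` and `control_guidance_end` arguments of the diffusers SDXL ControlNet pipeline. A hedged sketch (the model IDs and the helper function name are assumptions for illustration, not from this thread):

```python
def run_openpose_sdxl(prompt, pose_image, strength=1.3, control_end=0.6):
    """Sketch: SDXL + openpose ControlNet with the settings discussed above.

    `strength` maps to A1111's control weight (controlnet_conditioning_scale),
    `control_end` to the ending control step (control_guidance_end).
    Model IDs below are assumptions; swap in whichever checkpoint you test.
    """
    import torch
    from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline

    controlnet = ControlNetModel.from_pretrained(
        "xinsir/controlnet-openpose-sdxl-1.0", torch_dtype=torch.float16
    )
    pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    result = pipe(
        prompt,
        image=pose_image,                        # preprocessed openpose map
        controlnet_conditioning_scale=strength,  # ~1.3 per the comment above
        control_guidance_end=control_end,        # stop control at 60% of steps
    )
    return result.images[0]
```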
@epzer yep, openpose seems to be especially hard to train a good model for :(
@diogod @epzer the author's response: "it is a model with more precise pose and lower aesthetic score, the two model are different styles you can try it."
@eurotaku From my tests it reliably offers neither.
Have you tried this one? It's not as good as 1.5 but still quite good IMO
https://civitai.com/models/136070?modelVersionId=151451
not working at all
Tons of models. Please add a rating for your best recommendations, or a description for each.
Every version info should include a link to the original Hugging Face repository; the model card is usually the only source of info about a model, and sadly I don't have the time to test and retest all these models for a thorough and up-to-date comparison. In general, though, you should not expect the same level of consistency as with the SD1.5 ones; still, I would say they are all worth a try. The ones I use the most are probably the Stability.ai and kohya-ss ones, plus the IP-Adapters.
@eurotaku thanks for the info and models
Yeah, I agree. There are too many models everywhere. I've got several terabytes of models now, I need to cut it down a little lmao
@yofoton174609 Nah.... just buy more drives. :-D I wish my cable provider who claims "no download limit" would stop charging me more each month because I dl'd too much. I call it the "CiviTax". LOL
Best for depth with self-made masks and no pre-processor for me are bdsqlsz, kohya, hetaneko and diffusers. I find they're very everything-sensitive...LOL. Even more so than 1.5. But I am managing to get some semblance of control out of them, and not just smashing everything flat.
@parallelepipedon wow, thanks for the info. Which depth model would you say gives the most freedom in the output while preserving the pose, with or without a preprocessor?
