CivArchive
    ControlNetXL (CNXL) - ecomxl-softedge

    NEW 2vXpSwA7: anytest-v4 | openpose-v2_1 || abovzv: segment || bdsqlsz: canny | depth | lineart-anime | mlsdv2 | normal | normal-dsine | openpose | recolor | segment | segmentv2 | sketch | softedge | t2i-color-shuffle | tile-anime-α | tile-anime-β | tile-real || BRIA AI: bg-gen | canny | colorgrid | depth | NEW fill | openpose | recolor || CVL-Heidelberg: canny | depth || destitech: inpaint | inpaintv2 || diffusers: canny small | mid | full | depth small | mid | full | zoe || EcomXL: inpaint | softedge || Eugeoter: NEW noobai canny | depth | lineart-anime | lineart-real | mangaline | normal | scribble-pidi | scribble-hed | softedge-hed | tile | sdxl anime-canny | vidit-depth || h94: ip-adapter | ipa-vith | ipa+ | ipa+face || Hetaneko: pony canny | cannyv2 | color | depth | replicate | replicatev2 || HighCWu: canny-v3 || huchenlei: PuLID || Kataragi: canny | flatline | flatline-lora | inpaint | NEW ipa | line2color | line2color-lora | lineart | lineart-lora | NEW noob-ipa | recolor | recolor-lora | NEW rough-coating | tori29-blur | xdog-sketch || kohya-ss: real blur | canny | depth | anime blur | blur-beta | canny | depth | openpose | openposev2 | replicate | replicatev2 | scribble || PromeAI: lineart || ShermanG lineart || Stability.ai: canny | depth | recolor | revision | sketch || SargeZT: depth-16b-zoe | depth-faid-vidit | depth-zeed | depth-zoe | softedge | t2i-adapters binary | canny | color | depth | segmentation | softedge || TencentARC: canny | depth-midas | depth-zoe | lineart | openpose | recolor | sketch || TheMistoAI: mistoline | mistoline-lora || thibaud: openpose | openpose-lora || ttplanet: tile-real | tile-realv2 || NEW windsingai: pose | tile | tile-10w || xinsir: canny | cannyv2 | depth | openpose | openpose-twins | scribble | scribble-anime | tile | union | union-promax

    ControlNetXL (CNXL) - A collection of Controlnet models for SDXL

    (13.01.2025 - First NoobAI controlnets uploaded by Eugeoter)
    (12.01.2025 - First Illustrious controlnets uploaded: windsingai-pose & -tile)

    This collection strives to provide a convenient download location for all currently available ControlNet models for SDXL. Please do read the version info for model-specific instructions and further resources. If you are the author of one of these models and don't want it to appear here, please contact me to sort this out. The naming scheme of the files follows lllyasviel's from https://huggingface.co/lllyasviel/sd_control_collection/tree/main.

    CAUTION: The variants of controlnet models are marked as checkpoints only to make it possible to upload them all under one version; otherwise the already huge list would be even bigger. If you use download helpers, the correct target folders are extensions/sd-webui-controlnet/models for automatic1111 and models/controlnet for forge/comfyui.

    Requirements for Automatic1111: at least version 1.5.0 (better: upgrade to the latest 1.6.0 release) plus the sd-webui-controlnet extension v0.400 or newer. Bugs or weird behaviour might still occur. If you encounter any irregularities you can join us on our discord and ask for support, or get in contact with the developers of Automatic1111/ControlNet via github/huggingface.

    A huge thanks to all the authors, devs and contributors including but not limited to: abovzv, bdsqlsz, BRIA AI, CVL-Heidelberg, destitech, the diffusers team, Eugeoter, h94, Hetaneko, HighCWu, huchenlei, lllyasviel, kataragi, kohya-ss, Mikubill, PromeAI, SargeZT, ShermanG, Stability.ai, TencentARC, TheMistoAI, thibaud, ttplanet, windsingai and xinsir.

    Description

    Softedge model released by EcomXL on huggingface
    https://huggingface.co/alimama-creative/EcomXL_controlnet_softedge

    FAQ

    Comments (31)

    thiagojramos (May 10, 2024)

    If I may make a few suggestions: could you provide a changelog? I think it would be easier (for both us and you) to know which models were added and/or updated. Maybe include the changes in the description as a code block or in a pastebin.

    Example:

    CHANGELOG

    - 05/10/2024

    Added: BRIA AI bg-gen (SDXL)

    Updated: hetaneko-color (Pony)

    Updated: Post title to Something

    - 05/09/2024

    Updated: SargeZT depth-16b-zoe v2 (SDXL)

    Known issues added: The fp32/fp16 versions of Stability.ai's control-loras are marked as such to allow uploading both under a single version.

    Another suggestion would be to format this list of models in the description, it could look something like this:

    Canny:

    bdsqlsz: canny

    CVL-Heidelberg: canny

    diffusers: canny small | canny mid | canny full

    Hetaneko (Pony): canny | cannyv2

    kohya-ss: canny | replicate | replicatev2

    Stability.ai: canny

    TencentARC: canny

    BRIA AI: canny

    Depth:

    bdsqlsz: depth

    CVL-Heidelberg: depth

    diffusers: depth small | mid | full | zoe

    Hetaneko (Pony): depth

    kohya-ss: depth | replicate | replicatev2

    SargeZT: depth-16b-zoe | depth-faid-vidit | depth-zeed | depth-zoe

    Stability.ai: depth

    TencentARC: depth-midas | depth-zoe

    BRIA AI: depth

    Inpaint:

    destitech: inpaint

    diffusers: inpaint

    EcomXL: inpaint

    Lineart:

    bdsqlsz: lineart-anime

    TencentARC: lineart

    MLSD:

    bdsqlsz: mlsd-v2

    Normal:

    bdsqlsz: normal

    Openpose:

    bdsqlsz: openpose

    kohya-ss: openpose | openposev2

    thibaud: openpose | openpose-lora

    TencentARC: openpose

    Recolor:

    bdsqlsz: recolor

    Hetaneko (Pony): recolor

    Stability.ai: recolor

    TencentARC: recolor

    BRIA AI: recolor

    Segment:

    bdsqlsz: segment | segment-v2

    Scribble:

    kohya-ss: scribble

    Sketch:

    bdsqlsz: sketch

    Stability.ai: sketch

    TencentARC: sketch

    Softedge:

    bdsqlsz: softedge

    diffusers: softedge

    EcomXL: softedge

    SargeZT: softedge

    TencentARC: softedge

    t2i adapters: softedge

    Tile:

    bdsqlsz: tile-anime-α | tile-anime-β | tile-real

    ttplanet: tile-real

    Blur:

    kohya-ss: real blur | anime blur | blur-beta

    Color:

    Hetaneko (Pony): color

    t2i adapters: color

    BRIA AI: colorgrid

    Revision:

    Stability.ai: revision

    T2I:

    bdsqlsz: t2i-color-shuffle

    t2i adapters: binary | canny | color | depth | segmentation | softedge

    IP Adapter:

    h94: ip-adapter | ipa-vith | ipa+ | ipa+face

    bg-gen:

    BRIA AI: bg-gen

    thiagojramos (May 10, 2024)

    Ugh... I hate the formatting of posts here on Civitai, especially the comments 🖕🤬🖕

    eurotaku (May 10, 2024)

    @thiagojramos the only thing in a changelog would be the info about what has been added lately, and you can also get that from the compact list in the description. sadly, real info about the different models is scarce, and on the other hand i don't have the time to scour all the threads on github/huggingface/etc for more. i did think about adding a differently sorted list, especially to distinguish between classic and lllite controlnet models, because comfyui users need to know that to pick the correct workflow/nodes for them. and indeed you are right, the current model page isn't made for these kinds of collections; the compact link list more or less only serves to save you the hassle of clicking through the long version list at the top. let's see what i can do after adding all the new models. thanks for your feedback :)

    altoiddealer (May 10, 2024)

    Regarding the new inpainting model: I got it working in Forge (should work the same in A1111) by doing the following:
    1. Use the txt2img tab.
    2. Add your image to a ControlNet unit.
    3. Set the preprocessor to "None".
    4. Include a "Mask", where it's actually the opposite of how you may think: white = don't inpaint, black = inpaint.

    Follow-up edit: The model also works in img2img, but it's even clunkier to use. Again, the preprocessor must be "None", and now it needs "Upload independent image" (the same image), which allows checking "Use map".

    EDIT AGAIN: the img2img results are actually quite good. As with the txt2img method, the "not inpainted" part of the image will change ever so slightly. The benefit here is that there is no halo effect like you get with normal inpainting. To test this, do a loopback generation with, say, 12 loops: there will be no halo, whereas with normal inpainting there is essentially a solid outline by that point.

    This also seems to have some sort of effect in the Inpainting tab... the map can be the same image used as an inpaint mask, except inverted. Doesn't seem worth the trouble, though.

    eurotaku (May 10, 2024)

    awesome info, would pin your comment if i could :D (soon™)

    altoiddealer (May 13, 2024)

    I have another comment to add...

    I just started messing around with the EcomXL inpainting model, and this one behaves like you would normally expect: you set a preprocessor and it just influences the inpainting result.

    I noticed that this inpainting model has some pros and cons. In img2img, it does still create an edge like normal inpainting, but it tries to hide it better. It seems to work very well in txt2img; however, it is not influenced much by other ControlNets. This one inpaints the white part of the mask and does not affect the black part.

    The destitech inpainting model also has its pros and cons. The main con is that it affects the unmasked areas; however, it is easily guided by other ControlNets. Like I mentioned earlier, the mask needs to be inverted.

    I had the crazy idea that maybe both models could be used together at half weight...?
    As crazy as it sounds, it works remarkably well using them in combination (in txt2img). I used the same mask, just inverted for one unit. Result: the region we want inpainted is mainly the one being affected, and other ControlNets remain effective. You'll have to try it yourself to see.

    Again:

    EcomXL: needs a preprocessor. Mask: white = inpaint, black = don't inpaint.
    destitech: preprocessor "None". Mask: black = inpaint, white = don't inpaint.
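    The mask handling described above can be sketched in a few lines. This is only an illustration of the convention reported in this thread (EcomXL inpaints the white region, destitech the black one), not any extension's actual API; `make_mask_pair` is a hypothetical helper.

    ```python
    import numpy as np

    def make_mask_pair(mask):
        """Given one 8-bit grayscale inpaint mask (255 = region to inpaint),
        return the masks for the two ControlNet units.

        Assumption taken from the comments above: EcomXL inpaints the white
        region, destitech inpaints the black region, so the second unit
        simply receives the inverted mask.
        """
        ecom_mask = np.asarray(mask, dtype=np.uint8)
        destitech_mask = 255 - ecom_mask  # invert: white <-> black
        return ecom_mask, destitech_mask
    ```

    Both units would then run at 0.5 weight each, as described, so one mask file is enough for the whole combo.
    
    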

    eurotaku (May 17, 2024)

    @altoiddealer thanks again :)

    altoiddealer (May 17, 2024)

    Following up with another comment!

    I have tested all sorts of combinations of the two inpainting models in unison... using one at full strength and switching to the other at various steps... using different combinations of weights, including total weights above 1.0...

    I've been most pleased with the results I had initially: 0.5 weight for both models, for the full length. It's just an amazing combination. The difference is pretty significant even with a 0.4/0.6 balance, but it does not trump the 50/50 split.

    altoiddealer (May 23, 2024)

    Here I am, following up again, now with comments on the Kataragi inpainting model.

    Like the EcomXL inpainting model, this works as most would expect... it requires a preprocessor such as Inpaint Only, and the mask should be white = denoise, black = do not denoise.

    Of the three inpainting models, this one seems to adhere the most to the checkpoint's style, has good composition, and plays nicely with other ControlNet models.

    The downside is that the style does not blend as well with the source image; the edge can be more pronounced.

    I had previously suggested using a 50/50 split of the EcomXL and destitech models.
    The Kataragi and destitech models also work better together in a 50/50 split than on their own.
    The output is the best so far, except for the edges, which do not blend quite as nicely as the EcomXL + destitech combo.

    The Kataragi + EcomXL combo does not work well with other ControlNets. I do not recommend this combo.

    Ooloolool (Jun 3, 2024)

    "RuntimeError: Given groups=1, weight of size [16, 5, 3, 3], expected input[1, 3, 1024, 1024] to have 5 channels, but got 3 channels instead"

    What do I do now?

    I put a colored PNG in the image field. I put a black-and-white PNG in the "Effective region mask" field. I hit "generate." I get the above error message.

    What kind of input has five channels anyway? Even a PNG with an alpha channel has only four. What five channels?

    Edit: I just entered a PNG with an alpha channel, but it's still complaining that the input has only three channels. I have no idea what's going on.

    (Incidentally, this is A1111.)

    Also there is no option "upload independent image."
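    For anyone hitting the same error: the shape `[16, 5, 3, 3]` is the weight of the model's first convolution, meaning it expects a 5-channel input. One plausible reading, stated here purely as an assumption and not verified against this model, is that the extension is supposed to concatenate extra conditioning channels (for example a mask) onto the RGB image before the forward pass, and that step is being skipped, so the bare 3-channel image arrives instead. A minimal numpy illustration of where the channel counts come from:

    ```python
    import numpy as np

    h, w = 1024, 1024
    rgb = np.zeros((3, h, w), dtype=np.float32)    # the uploaded image: 3 channels
    extra = np.zeros((2, h, w), dtype=np.float32)  # hypothetical mask/conditioning channels

    # What a 5-channel first conv would want to see; if only `rgb` is passed,
    # you get exactly "expected input ... to have 5 channels, but got 3".
    cond = np.concatenate([rgb, extra], axis=0)
    ```

    If that reading is right, the fix would be on the extension's side (passing the mask/conditioning through), not in the input image, which would also explain why adding an alpha channel to the PNG did not help.
    
    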

    altoiddealer (Jun 3, 2024)

    @Ooloolool Which inpainting model? In txt2img or img2img?
    These models all behave best outside of the Inpainting tab, IMO.
    I'm using Forge, personally (not dev2 branch, main branch)

    Ooloolool (Jun 3, 2024)

    @altoiddealer AlbedoBaseXL is the model I picked. In text-to-image. Both the image and the mask are 1024x1024. The mask is an RGB PNG, but consisting literally of only black and white pixels.

    Also, what's Forge?

    Ooloolool (Jun 3, 2024)

    @altoiddealer Are we even talking about the same thing? I don't know my way around these comment sections. I'm at the BRIA AI background generator and I thought these comments were specifically about that. It turns out they're about all the models listed, and I'm guessing you're talking about a completely different one.

    Sorry, never mind. Apparently there is little documentation regarding these models and hardly anyone knows their way around them. I'm going to add an independent comment inquiring whether anyone knows how to use BRIA AI bg-gen...

    Standspurfahrer (May 10, 2024)

    Thank you for keeping this collection constantly updated.

    eurotaku (May 10, 2024)

    appreciated :)
    although honestly the current update mainly happened thanks to NatanS8's comment about the pony-specific controlnet models. great to see our community working so well together.

    eurotaku (May 20, 2024)

    new stuff incoming! :)

    huydejokka123 (May 11, 2024)

    the arrangement in this thread is the worst, i prefer you add every model in spread thread rather than this full miss, every time i open the thread i see only old models, and to see the new models i have to check all models here to figure out what's new !!!!!!!!!!!!!!!!!! :( :( :(

    eurotaku (May 11, 2024)

    what do you mean by "spread thread" and "full miss"? there's no need to check all the old models to see what's new: the list at the top of the description clearly shows which models are new, and you can jump directly to the corresponding version by clicking on a model name.

    codegix (Jun 23, 2024)

    Look at the majority thumb

    BizzAI (May 11, 2024)

    I have been looking for hours and can't seem to find any .yaml files associated with these controlnet models. What am I missing here? I'm running Easy Diffusion.

    eurotaku (May 11, 2024)

    there are no .yaml config files for these models. in both comfyui and auto1111 they work without them. does easy diffusion require such files for them? if so, you should probably try to contact the dev(s) of easy diffusion about this.

    serget2 (May 13, 2024)

    Maybe a noob question, but what nodes or workflow do I use here? The regular controlnet errors out when I apply it. Yes, it is up to date.

    eurotaku (May 13, 2024)

    depends on the model you use. if the normal controlnet nodes error out, you are probably using a ControlNet-LLLite model; you can find the necessary nodes and example workflows here: https://github.com/kohya-ss/ControlNet-LLLite-ComfyUI

    serget2 (May 13, 2024)

    @eurotaku Thank you very much, I will try it tomorrow because it is getting late. I will let you know.

    serget2 (May 14, 2024)

    @eurotaku Managed to get it to work a bit, but I am still going to need a decent workflow; a noob like me only gets so far. Some canny models (I only tried canny) do not work, and instead of turning an image into a canny map I loaded a canny image from the internet. Pretty basic; it works, but it's very amateurish.

    eurotaku (May 17, 2024)

    @serget2 you mean the mask? you need the preprocessing nodes for that; those usually download their detection models automatically in the background from huggingface or github. the models here in this collection are solely used to create new images from those preprocessed masks.

    shadow0 (May 14, 2024)

    Warning: bdsqlsz openpose was a huge waste of time for me; it simply didn't work in the latest A1111. You might want to pick another one, like TencentARC, instead.

    altoiddealer (May 15, 2024)

    Personally, I recommend the thibaud version, although it is a pretty heavy model. https://civitai.com/models/136070?modelVersionId=151451

    Tozi_White (May 20, 2024)

    @altoiddealer Any more recommendations? I have plenty of models for control net and almost none work well in SDXL. Thanks to this advice, openpose is finally working for me.

    altoiddealer (May 20, 2024)

    @iNcUb Here are a few more recommendations:
    - The softedge model by SargeZT.
    - The diffusers "full" models from this link work very well for canny and depth:
    https://huggingface.co/lllyasviel/sd_control_collection/tree/main
    - The t2i-adapters models have a little more leeway by comparison (canny, depth), which can be a good thing (potentially more creative output).
    - In another comment, I wrote about the inpaint models... tricky to work with, but very good results.
    I don't really use other ControlNets.

    eurotaku (May 20, 2024)

    @iNcUb you can also try to raise consistency by combining two or more controlnet models; you'll probably need to lower their weights a bit then to avoid overburning the controlnet guidance into your image.
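    The intuition behind lowering the weights can be shown with simple arithmetic; `combine_guidance` is a hypothetical helper for illustration only, not a real extension function:

    ```python
    import numpy as np

    def combine_guidance(residuals, weights):
        """Weighted sum of per-ControlNet guidance residuals (hypothetical
        helper for illustration). Stacking several units at full weight
        roughly adds their contributions, which is what "overburns" the
        guidance into the image; scaling the weights down keeps the combined
        strength close to what a single unit would apply."""
        total = np.zeros_like(np.asarray(residuals[0], dtype=np.float32))
        for r, w in zip(residuals, weights):
            total += w * np.asarray(r, dtype=np.float32)
        return total

    # Two units at 0.5 weight each contribute about as much as one unit at 1.0:
    a = np.ones((4, 4))
    b = np.ones((4, 4))
    combined = combine_guidance([a, b], [0.5, 0.5])
    ```

    The same reasoning suggests that three stacked units might go down toward roughly a third of their single-unit weights, then be tuned up or down from there by eye.
    
    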

    Checkpoint
    SDXL 1.0

    Details

    Downloads
    859
    Platform
    CivitAI
    Platform Status
    Available
    Created
    5/10/2024
    Updated
    4/30/2026
    Deleted
    -