CivArchive

    STOP! THESE MODELS ARE NOT FOR PROMPTING/IMAGE GENERATION

    These models are extracted from the base ControlNet models in a slightly different way from the others. They produce different results due to a different extraction method.

    These are the models required for the ControlNet extension, converted to Safetensor and "pruned" to extract the ControlNet neural network. I have tested them, and they work.

These models are embedded with the neural network data required to make ControlNet function; they will not produce good images unless they are used with ControlNet.

    These models were extracted using the extract_controlnet_diff.py script, and produce a slightly different result from the models extracted using the extract_controlnet.py script.
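As a rough illustration of how the two scripts differ, here is a conceptual sketch only — not the actual extract_controlnet.py / extract_controlnet_diff.py code, which operate on PyTorch state dicts. Plain floats stand in for weight tensors, and the function and key names are hypothetical:

```python
# Conceptual sketch only -- the real scripts work on torch state dicts;
# plain floats stand in for weight tensors here.

def extract_plain(controlnet_sd, keep_prefix="control_"):
    """'Brute force' style: copy out only the ControlNet-specific entries."""
    return {k: v for k, v in controlnet_sd.items() if k.startswith(keep_prefix)}

def extract_diff(controlnet_sd, base_sd):
    """'Difference' style: store (ControlNet weight - SD 1.5 weight) for
    shared entries; ControlNet-only entries are kept as-is."""
    return {k: (v - base_sd[k]) if k in base_sd else v
            for k, v in controlnet_sd.items()}

if __name__ == "__main__":
    base = {"unet.w1": 1.0}                   # stand-in for SD 1.5 weights
    cn = {"unet.w1": 3.0, "control_in": 5.0}  # stand-in for the 5GB model
    print(extract_plain(cn))        # only the ControlNet-specific entry survives
    print(extract_diff(cn, base))   # shared weight is stored as an offset
```

Either way the result is far smaller than the original 5GB checkpoint, because the full SD 1.5 weights are no longer duplicated inside it.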

    The original version of these models in .pth format can be found here. BUT YOU DO NOT NEED THESE .pth FILES! The files I have uploaded here are direct replacements for these .pth files!

    • control_sd15_canny

    • control_sd15_depth

    • control_sd15_hed

    • control_sd15_scribble

    • control_sd15_normal

    • control_sd15_openpose

    • control_sd15_seg

    • control_sd15_mlsd

    Download these models and place them in the \stable-diffusion-webui\extensions\sd-webui-controlnet\models directory.

    Note: these models were extracted from the original .pth using the extract_controlnet_diff.py script contained within the extension Github repo. Kohya-ss has them uploaded to HF here.


    Comments (70)

    hjgkhgkjhgkhjgkjhgFeb 17, 2023

Hello, what would be the objective? That they'd be lighter?

    theally
    Author
    Feb 17, 2023

    Correct, the file size is greatly reduced.

    hjgkhgkjhgkhjgkjhgFeb 17, 2023

@theally Hello, I just downloaded the file and, honestly, it worked better than the original; I got better results. The problem is that I don't know how to download the others; there's only the canny.

    theally
    Author
    Feb 17, 2023· 2 reactions

    @hjgkhgkjhgkhjgkjhg Scroll down the page, on the left hand side there are "versions" - one version for each Model.

    creedukFeb 17, 2023· 3 reactions

There are 2 sets (this one and the other) that seem to have subtle differences; I'm not sure what makes them work differently. @theally could you use the same seed to show the difference? I noticed some samples also had a different negative prompt, so the output would vary.

    wktraFeb 17, 2023· 1 reaction

    @theally although we appreciate the effort, Civitai is getting slammed with uploads of models that lack description and purpose.

    You've uploaded two similar sets of controlNet models. And your description for what seems like a duplicate is "THESE ARE DIFFERENT."

    Could you be a little bit more specific please???

    theally
    Author
    Feb 17, 2023

    @wktra If you read the description, you'll see that it's plainly set out that the difference models are extracted using extract_controlnet_diff.py - the script is in the Github repo, if you'd like to examine it. It pulls the difference between the 5GB model and SD 1.5.

    bluutrrr11Feb 24, 2023· 2 reactions

    @theally actually the description is not clear at all. We want to know what the difference is in how the images look, not in how the model was extracted.

    theally
    Author
    Feb 24, 2023

    @bluutrrr11 I recommend you download them and test it out, there's not much difference really.

TranscendentThots

@theally Not much difference in the file size, either, from what I can tell. Canny v1.0 is 689.12 MB. Canny Difference is 689.13 MB. Wait... that's actually slightly bigger?

I'm super new to all this. I don't have a CS degree. I'm limping along on a borrowed laptop with 4 GB of VRAM and 100 gigs of disk space.

    Which of your models do I want to download? And why?

I'd even settle for a link to whatever documentation explains why one would choose to extract using extract_controlnet_diff.py instead of the regular one.

    I think most of the people in this thread are probably as ignorant as I am. I'm just more practiced at explaining what I don't understand about things.

    You've done us a great service by extracting and uploading all these files. Please just help us appreciate what it is you've actually done. Does Difference need to be used with Vanilla 1.5, for example? Or will any model derived using 1.5 as the base work?

    That's probably quite an ignorant question I just asked. But I insist with all sincerity that that's only because I'm a clueless noob riding the hype train. I have literally no idea what I'm looking at, here.

    Please help. Please send brains.

    Hypothetically speaking, why would a typical user prefer to use the Difference Canny file over the ~1kb smaller standard Canny file, or vice versa?

    theally
    Author
    Apr 3, 2023

    @TranscendentThots The problem is, I haven't extensively tested the differences between these and the other ControlNet models - not when it comes to image generation. Sure, I can explain to you why they're different on a technical level, but not what they actually do to your images.

The people who made the original ControlNet models released two scripts to "prune" (extract) smaller models from their huge OG ControlNet files. One was kind of a "brute force" script which extracted the required info. The second performed a "difference" extraction: comparing the ControlNet model to SD 1.5 and keeping only the difference, which is all that's needed for ControlNet to work. Both sets of models perform exactly the same operations, but the extraction method is different, and because of that, the end results are not 1:1. To know exactly what the changes are between the two methods you'd have to dig into their Python.

    Personally, I ran some super quick tests, chose the Difference models over the other set, and didn't look back. But some folk who've tested extensively say they prefer the other set. Some people even say the TencentARC set are better (nope!), so - it's personal preference, and ControlNet works pretty much the same whichever set you choose.
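(One practical upside of the difference format, sketched very loosely here: because the stored values are offsets from SD 1.5, they can in principle be re-applied on top of any SD 1.5-derived checkpoint, which is the idea behind the repo's "transfer control" notes. Hypothetical helper name; plain floats stand in for weight tensors.)

```python
def apply_diff(diff_sd, target_sd):
    """Re-apply stored ControlNet offsets on top of another SD 1.5-based
    model's weights (base weight + offset); ControlNet-only entries pass
    through unchanged."""
    merged = dict(target_sd)
    for k, delta in diff_sd.items():
        merged[k] = merged.get(k, 0.0) + delta
    return merged

if __name__ == "__main__":
    fine_tuned = {"unet.w1": 1.5}               # some SD 1.5 derivative
    diff = {"unet.w1": 2.0, "control_in": 5.0}  # stored offsets
    print(apply_diff(diff, fine_tuned))         # base weight plus offset
```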

    TranscendentThotsApr 3, 2023· 1 reaction

    @theally Huh. Interesting...

    I suppose once I get used to using ControlNet, I'll do a few renders with the same seed, changing only the model, and see what happens. I wonder how hard it would be for someone with way more computing resources than me to make... say, a Discord Bot that generates the same image using two of these models... or possibly the same prompt and any two models... and compares the results side-by-side. Maybe even with labels showing the difference? I feel like that would be a good learning resource. (Then again, I might be biased since each image takes me ~15 minutes, lol.)

    Thanks again for the upload, and thanks for engaging with my (admittedly, somewhat prickly) comment.

    theally
    Author
    Apr 3, 2023

    @TranscendentThots No problem! I don't have a problem explaining things - I run a super popular Patreon for SD and Generative AI tutorials (almost 200 Patrons!), so I'm happy to help - with this one though, it really does come down to personal preference. I could set up the tests and make the x/y grids and recommend a particular model, but this one is really up to the user, the differences are so slight. Good luck!

    TranscendentThotsApr 15, 2023

    @theally Further thought, and this might be getting a bit philosophical, but... if we don't understand what any of the models actually do? Why are we downloading any of them in the first place? It's like, "oh, I saw a pretty picture online, but I don't know what other files they were using, so I can't reproduce it." What are we even doing here? The more I learn about SD, the dumber I feel.

    aximanFeb 17, 2023

    What does this new update do?

    theally
    Author
    Feb 17, 2023· 1 reaction

It's explained in the model description: it's just a different method of extracting the neural net which powers ControlNet, and it produces slightly better results.

    wktraFeb 17, 2023

    @theally ok, so are you saying that these DIFFERENCE models produce slightly better results than the OTHER extracted controlnet models posted here yesterday?????

    theally
    Author
    Feb 17, 2023· 1 reaction

    @wktra Yes, correct. It's subjective though.

    SD_AI_2025Feb 18, 2023

@wktra Not "better", or they'd be called ControlNet-better-modules.

Good or bad is subjective, and so is "better". They are different.

The results of the extraction methods are explained here: Transfer Control to Other SD1.X Models, and here: Control model offsets, and some others.

    drakmourFeb 17, 2023· 1 reaction

As I understand it, you are well versed in the technical side of creating such a thing. Maybe you can answer a question: the Pose and Pose-with-hands models lack any detection of depth (like one of the other models has) for characters: determining which are closer, which are farther, and which limbs or body parts are behind the body or objects. For example, if the model puts her hands behind her back, it just decides they are not visible and generates the non-visible parts randomly in front, not behind. The same goes for characters behind other characters; it tries to draw them not behind. Or maybe there is a way to show that a limb or body part is behind something with this pose skeleton? Maybe add a special coloration that turns brighter or a different color if a part is closer/further from the viewer, or if it is behind something.

    eric_daleFeb 17, 2023

    Tested a few images for Open Pose - it generates a blank black canvas

    theally
    Author
    Feb 17, 2023

    Hey, I've had some of the other mods test this - we can't reproduce that behavior. We're getting correct OpenPose skeleton generation for the OpenPose (Difference) model. Do you get an error in the console?

    eric_daleFeb 17, 2023

    @theally I think it largely depends on the input image. Some of my inputs can be correctly interpreted and some couldn't. I guess it can't be used to interpret black and white line art. Others (colored) are mostly fine.

    ByteCrafterFeb 19, 2023

@eric_dale This is correct; the input image needs either most of the skin showing or the clothes to be skin-tight in style. If they are too baggy, it will not work.

    Blackhat98Feb 17, 2023

This is the cool thing about open source and the community.

    ninjasaid13Feb 17, 2023· 1 reaction

"These models are extracted from the base ControlNet models in a slightly different way from the others. They produce different results."

    what's the difference?

    theally
    Author
    Feb 17, 2023

It's mentioned in the description. One set is extracted with a script called extract_controlnet.py; the new ones are extracted with extract_controlnet_diff.py, which compares the 5GB model to SD 1.5 and produces the difference.

    TangBohuFeb 17, 2023

@theally Which one is the officially recommended one?

    theally
    Author
    Feb 17, 2023

    @yaxilani The 5GB pre-trained pickletensor models - only requiring ~45 GB of disk space :)

    Seriously, both the pre-trained extracts are mentioned on the front page of the Github repo, there isn't a preferred one.

    l3luel3illApr 12, 2023

@theally but what is the difference in the result?

    theally
    Author
    Apr 12, 2023· 1 reaction

    @l3luel3ill that's not something that can be explained, you'd have to try them out. They both do the same thing, with almost imperceptible differences.

    smolspiteFeb 19, 2023

What about the openpose_hand preprocessor? The result it gives doesn't work with the openpose model for me.

    theally
    Author
    Feb 19, 2023

Hmm, check out the GitHub. If I recall correctly, the Hand preprocessor wasn't working or wasn't implemented. Someone did extensive tests with it in the issues/comments and couldn't get it to work.

    smolspiteFeb 19, 2023· 1 reaction

    @theally Preprocessor works fine for me from the box, outputting skeleton with all hand/finger bones.

    theally
    Author
    Feb 19, 2023

    @smolspite Outrageous! I'll have to give it a go. I was thinking of this.

    CivitLexFeb 22, 2023

Unlike the other preprocessors, the OpenPose preprocessor can fail if it can't interpret the input, and it will output a blank image.

On Auto1111, if you use the OpenPose extension you can load an image, tweak the skeleton, and then push it to the txt2img ControlNet properties, which lets you see if you have an undetectable (or wrong) pose before doing a generation. Lastly, you can set the preprocessor in the ControlNet properties to 'none', because the pose has already been generated.

    PolygonFeb 21, 2023· 1 reaction

    Is Seg not working?

    theally
    Author
    Feb 21, 2023· 1 reaction

    I haven't been able to get it to work. At first there was a dependency which I guess I didn't have, but they removed that, so it should work - but I still get an error! So, who knows 🤷‍♀️

    PolygonFeb 22, 2023

    @theally Ok, just checking. Hopefully we get it up soon, it's an interesting way of working. Thank you!

    smokewFeb 22, 2023
    theally
    Author
    Feb 22, 2023

It looks like the ones you linked are not Difference models. There are two ways to extract ControlNet models from the original 5GB checkpoint files; the ones I have uploaded here, the "Difference" models, are created by grabbing the difference between the ControlNet models and SD 1.5. So, yes, the linked models will produce a slightly different result.

    EricRollei21Jun 14, 2023

    @theally Should we copy the yaml files from the non difference models and name them to match the difference files? Bonus question - when do you go for the difference cnet models over the regular ones?

    theally
    Author
    Jun 14, 2023· 1 reaction

    @EricRollei21 you should be using ControlNet 1.1 files now - assuming your ControlNet is up to date (they're also on my profile). I just left these models up for historic/archival purposes.

    EricRollei21Jun 16, 2023· 1 reaction

    @theally Thank you for the reply. I think I have the 1.1 files... Things changing fast with all AI ! Appreciate your help and all the models and things you've shared!

    johnslegersFeb 23, 2023

    I learnt about the existence of ControlNet last week or so. I only tried the demo @ https://huggingface.co/spaces/hysts/ControlNet so far but was impressed enough with the output to want to play around with ControlNet further soon.

I don't understand what these new models are for, though. Can you recommend any sources on how to use them? Also, where did you get your info on how to produce these models?

    theally
    Author
    Feb 23, 2023

Hi there! So, a couple of things: I really don't know the best place to find tutorial content on ControlNet, but I myself have a comprehensive ControlNet tutorial for my Patrons. If you use Automatic1111's WebUI, you'll be able to take advantage of ControlNet.

    As for the models, I didn't create these, as such. The people who made ControlNet released pre-trained models which were 5GB each, containing all the necessary code to perform the different ControlNet functions. They also released two scripts which could be used to "extract" the neural network code/functions from these models, making them a whole lot smaller. That's what these are - extracts of the official models. You can also find official extracts on the Huggingface website.

    To use ControlNet you'll need either these models, or the official ones from Huggingface - they both do the same things.

    Just to make it a little more confusing, there are two sets of models for ControlNet currently - a "standard" set, and a "difference" set - just two slightly different ways of extracting the neural net from the original models.

    Hope that helps! x

    ghettoandroidFeb 23, 2023· 4 reactions

    Thank you! This is actually saving me some money on Google Colab. 🙌🙏

    kalmmadJul 17, 2023

How do you use this on Google Colab?

    MagicalEroticaFeb 25, 2023· 1 reaction

I don't understand. Where do I put this one? And if I have them already, do I need this, or is there an advantage I'm not aware of, perhaps the size? You should be clearer in your description.

    theally
    Author
    Feb 25, 2023

    Not really sure how I could be clearer - they're models for ControlNet, extracted from the full size 5GB models, using a script which computes the difference between those models and SD1.5. They produce different results from the original models, and from the other extracted models I uploaded, but whether they're "better" is a personal preference thing. You only need one set of ControlNet files.

    MagicalEroticaFeb 25, 2023· 1 reaction

So where would I put a safetensors file when the originals were .pth? Do I use that one model as a replacement for all of them?

    theally
    Author
    Feb 25, 2023

    @MagicalErotica correct, they're drop-in replacements for the originals - get rid of the .pth, drop in the .safetensors.

    MagicalEroticaFeb 25, 2023

@theally Excellent. I have been doing much research. It seems like the new update allows for more than one ControlNet at the same time. If so, and nothing is different, then the same model can be loaded into the GPU, saving some 700 megabytes for each extra, but I haven't tested it out. Theoretically, only one should be loaded. I have to download this and try it out.

    MagicalEroticaFeb 25, 2023

    In fact, you would clearly see the difference if you use the old one compared to the new one for each control model used in parallel.

    MagicalEroticaFeb 25, 2023

All tests have been completed. They are identical. Also, when you load more than one, only one model is loaded, so there is a MASSIVE GPU memory saving.

    You sir, are genius.

    MagicalEroticaFeb 25, 2023

I spoke too soon. The results are different when you use parallel models. The results are better if you don't use this model in parallel.

    MagicalEroticaFeb 25, 2023· 2 reactions

Final note: there is a difference that we cannot see. These small differences make the results worse, in my opinion. It adds a lot of noise to the final image (not enlarged).

    MagicalEroticaMar 8, 2023· 3 reactions

Visiting this thread from 2 weeks ago: it no longer makes sense because the files are different. For the sake of historical bookkeeping: there was only one file offered, and it was advertised as capable of replacing all of the other .pth models. The result was that, yes, only one file could do the job close to the originals, but a closer look showed that there was added noise in the output.

    2awes_f88n2May 14, 2023· 4 reactions

Are there YAML files for these? Apparently SD has issues finding them. Thank you.

    KlashMay 16, 2023· 1 reaction

The YAMLs should already be downloaded once you install ControlNet from the Extensions tab; otherwise, go here: https://github.com/Mikubill/sd-webui-controlnet

Make sure that the Settings > ControlNet config path is set to stable-diffusion-webui\extensions\sd-webui-controlnet\models\cldm_v15.yaml

You shouldn't get errors after this, hopefully.

    bccafeMay 20, 2023

Do we need the "yaml" files to go with these? Please let us know.

    theally
    Author
    May 20, 2023

    Yes, for 1.1 you need the yamls, or the models won't load.

    ZUSIMOOct 13, 2023

@theally That's just it: the YAML isn't on the GitHub anymore, only v15.

    amywild2003867May 27, 2023

The console reported an error; did I get something wrong? ERROR: You are using a ControlNet model [controlnetPreTrained_openposeDifferenceV1] without the correct YAML config file

    theally
    Author
    May 27, 2023

    Hey, so, you should really be using the ControlNet 1.1 models now - I also have those uploaded. They require a .yaml file (also provided) to function.

    shihanqJun 8, 2023

    @theally Where are the .yaml files provided? When I click download, I only get the .safetensor

    theally
    Author
    Jun 8, 2023

    @shihanq No YAML required for these models - these are the old V1.0s before the YAMLs were necessary. You'll want https://civitai.com/models/38784/controlnet-11-models ControlNet 1.1 - which has the YAMLs provided alongside each model.

    HaiderAliJun 17, 2023

Hey, please help me. I have installed openpose, depth map, and soft edge; all are working except openpose. I did everything, but it gives me this error: RuntimeError: unexpected EOF, expected 2735762 more bytes. The file might be corrupted. Please help.

    dream3Feb 10, 2024

Hello, what is the difference between this set and the DIFF set?

    Tzurababie145Oct 29, 2024

Yeah, I also want to know... but I guess we shouldn't be using older versions, or I don't know what the impact would be if combined with SDXL or Flux.