STOP! THESE MODELS ARE NOT FOR PROMPTING/IMAGE GENERATION
Note: These are the OG ControlNet models - the Latest Version (1.1) Models are HERE.
These are the models required for the ControlNet extension, converted to Safetensor and "pruned" to extract the ControlNet neural network. I have tested them, and they work.
These models contain only the neural network data required to make ControlNet function; they will not produce good images unless they are used with ControlNet.
The original version of these models in .pth format can be found here. BUT YOU DO NOT NEED THESE .pth FILES! The files I have uploaded here are direct replacements for these .pth files!
control_sd15_canny
control_sd15_depth
control_sd15_hed
control_sd15_scribble
control_sd15_normal
control_sd15_openpose
control_sd15_seg
control_sd15_mlsd
Download these models and place them in the \stable-diffusion-webui\extensions\sd-webui-controlnet\models directory.
Note: these models were extracted from the original .pth files using the extract_controlnet.py script contained within the extension's GitHub repo.
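The extraction idea can be sketched in Python. This is an illustrative sketch only, not the extension's actual extract_controlnet.py; the "control_model." key prefix and the exact key names in the dummy state dict are assumptions made for demonstration:

```python
# Illustrative sketch of ControlNet extraction: keep only the ControlNet
# weights from a full checkpoint's state dict and discard everything else
# (base SD UNet, text encoder, VAE). The "control_model." prefix is an
# assumption about how the keys are named, not a guarantee.
def extract_controlnet(state_dict, prefix="control_model."):
    """Return only the entries belonging to the ControlNet network."""
    return {k: v for k, v in state_dict.items() if k.startswith(prefix)}

# Tiny demonstration with a dummy state dict (floats stand in for tensors):
full = {
    "control_model.input_blocks.0.weight": 1.0,  # ControlNet weight -> kept
    "model.diffusion_model.out.weight": 2.0,     # base SD weight -> dropped
    "cond_stage_model.embedding.weight": 3.0,    # text encoder -> dropped
}
pruned = extract_controlnet(full)
print(sorted(pruned))  # → ['control_model.input_blocks.0.weight']
```

In the real script the surviving tensors would then be written out with the safetensors library, which is how these uploads end up as small .safetensors files.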
Please consider joining my Patreon! Advanced SD tutorials, settings explanations, adult-art, from a female content creator (me!) patreon.com/theally - I also have a write-up of ControlNet and will be updating with the latest news/developments!
Comments (73)
Thanks for uploading them here, HF gives me hives. XD
Oh, and they're pruned, double awesome!
Have any differences been noticed? Or is it really exactly the same, just smaller?
I tested, was able to achieve 1:1 results with the full size models. They're not "pruned" in the traditional way - the Neural Net used by ControlNet is extracted with a special script.
@theally much appreciated
This might be really stupid but what's the "other stuff" in those 5G files then?
@mindframe this also applies to full versions of trained checkpoints vs pruned versions - the pruned versions have discarded data used in the training of the larger models, which doesn't affect the actual image generation (or affects it very slightly).
You'll sometimes see the term full-ema on unpruned models, this refers to Exponential Moving Average, and is a checkpoint on which training can be resumed. These 5GB files are, I suspect, full EMA models.
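The pruning described above can be sketched the same way. A minimal sketch, assuming EMA weights live under a "model_ema." key prefix (the actual naming varies between checkpoint formats):

```python
# Illustrative sketch of EMA pruning: a "full-ema" checkpoint carries a
# second, exponentially-averaged copy of the weights that is only needed
# to resume training. Pruning drops that copy, which is why file size
# shrinks without affecting image generation. The "model_ema." prefix is
# an assumption for this demo.
def prune_ema(state_dict, ema_prefix="model_ema."):
    """Drop the EMA duplicate weights kept only for resuming training."""
    return {k: v for k, v in state_dict.items() if not k.startswith(ema_prefix)}

full = {
    "model.diffusion_model.out.weight": 1.0,      # inference weight -> kept
    "model_ema.diffusion_model_out_weight": 1.0,  # EMA duplicate -> dropped
}
pruned = prune_ema(full)
print(sorted(pruned))  # → ['model.diffusion_model.out.weight']
```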
You're amazing! Am I correct that these can be used in place of the corresponding 5.71GB control net models on Hugging Face? Or do we still need those?
you can use these instead
Thanks! Correct, you can use these in place of the 5.71 GB files, and they produce exactly the same results.
@theally Fantastic! Thanks a TON for doing this! I've been wanting to dive into ControlNet but simply don't have the drive space to support those huge models.
Every time I try to use ControlNet on my M1 Mac I get this warning:
UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
please help
Hey! I'm sorry to hear this, but I have zero knowledge of Stable Diffusion on Mac, I'm afraid. Perhaps someone with more experience can jump in here to help!
Launch webui with the --no-half argument
Gah, and I just cloned the unpruned repo last night. lol
Only ~45 GB, no big deal :)
Early bird gets the worm, but it's too fucken fat to be eaten.
What are the difference between these and the default controlNet models?
The ControlNet neural network has been extracted, much like pruning a normal checkpoint model. They still produce the same results as the 5GB versions; they're just a lot smaller.
So, this is like the light version (pruned) of the original Controlnet models? (which consumes a lot of space!)
My hard working ssd salutes you!
Huge thanks! These allowed me to work with ControlNet. I cannot use normal or depth (they have memory issues), but the others are working. When I saw the original size I figured I was out of luck.
Overall, these work better for me than the "difference" models. I have tried a variety of Denoise/Weight combos and models too, so 🤷🏻♂️
It's personal preference, for sure. They're just... different :)
@theally Now the real question would be, is it possible to merge these too? I think merging depth, canny, and OpenPose "should" considerably improve outputs. Right? :)
@Jaxx We were just discussing this - only one way to find out!
@theally This has now been implemented. Just update the extension to the latest version, Go to Settings > ControlNet > Multi ControlNet: Max models amount (requires restart)
and you can set the slider from 1 to 10!
Apply settings and restart (with cmd). Now you can see several ControlNets to be applied in sequence. Have fun! :)
Any hints why I got this error while using 'depth' or 'normal_map' as the preprocessor:
AssertionError: Controlnet is enabled but no input image is given
I have tried different input images, still not working. Other options are fine. Even the 'depth_leres' can generate a depth image. I really have no clue what could go wrong😕
Very odd! Sorry you're having an issue. Is your ControlNet updated to the most recent version?
@theally Thanks for your response. It's because of my corrupted annotator files. Due to my bad networking, maybe? Someone has mentioned similar issues here: https://github.com/Mikubill/sd-webui-controlnet/issues/152 After re-downloading the dpt_hybrid-midas-501f0c75.pt file, nothing bugs me anymore.
Again, huge thanks for your efforts!
@Liuhao2b Excellent! Glad you got it sorted! Thanks for the update - this would have bugged me! :)
I had issues with depth and normal. I found reloading again seemed to fix depth. It seems any time it gets mixed up, maybe with too many extra elements loaded between ckpt, upscalers, ControlNet models, etc., it needs a forced refresh.
Loving the results I think scribble is my fav so far.
Quick question: which model should be used in txt2img? All models seem to be working fine in img2img but aren't doing well in txt2img. Or do those awesome features only work for img2img?
All the models I've uploaded here work in the txt2img tab just the same as in the img2img tab. Just make sure you're choosing the correct pre-processor.
@theally Thanks for the info. I'm using Colab (and a symlink to load the preprocessor models from another Gdrive account), so maybe my setup isn't correct and caused those problems, because I'm always getting a blank second image in txt2img. In img2img everything seems normal and perfectly loaded. Gonna try a fresh install of Automatic1111 later. Thanks again!
THIS. This is what a hidden treasure is like. We're another step away from bad hands and a step closer to precisely rendering our imagination. THANK YOU.
I'm confused. Why download this instead of the official pruned one available via the huggingface page? What am I missing here?
Two reasons: first, when I posted this, there was no official pruned version. I extracted from the 5GB models and posted here prior to pruned versions going up on HF.
Second, lots of folks don't like to use HF - their interface leaves a lot to be desired. And lastly, for visibility - ControlNet is a game-changer for SD!
For some reason the pruned ones on Huggingface did not work for me, but theally's models worked. So I'm very glad she did it and shared it with us :)
I still can't find the HF pruned files, so I guess I will continue to use these.
@kinkmasterg kohya-ss/ControlNet-diff-modules at main (huggingface.co)
posted same day and everything. I think Ally is the AI now.
any chance to extract also the depth_lres version?
depth_leres isn't a model, just a preprocessor
Can you do the same extraction for 2.1 or not possible ?
No, sorry, as of right now ControlNet is only for SD1.5
Doesn't seem to work for me, can't get it to do the pose I'm giving it.
I'm having difficulties installing these models and doing inference on my computer. Does anyone have any advice?
@maxwellcarey7777 I have a full tutorial on Patreon. Do you have the ControlNet extension for WebUI, and are the models downloaded and installed in the stable-diffusion-webui\extensions\sd-webui-controlnet\models folder?
@daquanlanamer128 that's not very nice 😭😂. If you don't want to join my Patreon you don't have to.
hey, is there any known way to get this running on 4gb vram? my computer always hangs when trying to load controlnet
If you've already checked lowvram in ControlNet, try adding the --medvram argument in webui-user.bat, but it will increase generation time. You can see details on the AUTOMATIC1111 GitHub issues page.
@alley71267814
thanks for the protip! yeah, got everything working now, thanks.
So, do you just put them in the same folder with all those .pth files?
Correct, you don't need the big .pth files once you have these.
@theally Thanks, some of us are real noobs just going with the flow and instructions about these things
@paulonunesbazilio729 All good! I have a really in-depth ControlNet tutorial on the Patreon, if that'd be something you'd be interested in.
This is not a plug-in for QiuYe One Button Package
It doesn't claim to be. Make sure you read the information in the model card for additional information. This is a widely-known and widely-accepted model set for creating some great images. You will find it to be very useful if you read the information and follow the directions. I hope this helps.
@elldreth get it
When I run openpose it starts downloading some files. Why is that?
you are the best!! i cannot use the original controlnet but with this i can make images!!
Has anyone tried using this with InvokeAI?
When I use ControlNet, the max res I can get is 256 on 6 GB VRAM; 512 gives me OOM. I used medvram and xformers. Any workaround?
Which model are you using? You mean you're upping the Annotator Resolution?
Thanks. You are a lifesaver for me and my small GPU.
please please make v2.1 models. 2 people have done it but their models are both broken
So I downloaded this and placed it in my extension's models folder, but all I see is pretrainedcanny; I don't have the openpose one. I really just want the pretrained openpose model. Where is it located?
Civitai arranges model versions (in this case, the different ControlNet models) along the top of the screen, under the title.
i saw it after i posted.... thanks for the reply tho... also love your works!
Will this be updated for version 1.1 ?
Thanks. Me and my computer thank the sir!
The Ma'am, pls :) Also, you should download ControlNet 1.1 models instead - https://civitai.com/models/38784/controlnet-11-models
@theally dang you a girl... sup ;)
Details
Files
controlnetPreTrained_hedV10.safetensors
Mirrors
controlnetPreTrained_hedV10.safetensors
control_hed.safetensors
control_hed-fp16.safetensors
hed-fp16.safetensors