STOP! THESE MODELS ARE NOT FOR PROMPTING/IMAGE GENERATION
These models are the TencentARC T2I-Adapters for ControlNet (T2I-Adapter research paper here), converted to Safetensors. These are optional files that produce similar results to the official ControlNet models, but with added Style and Color functions.
Note: these versions of the ControlNet models have associated YAML files, which are required. The models won't load without them.
These models are embedded with the neural network data required to make ControlNet function; they will not produce good images unless they are used with ControlNet.
The original versions of these models in .pth format can be found here. BUT YOU DO NOT NEED THESE .pth FILES! The files I have uploaded here are direct replacements for them!
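For the curious, the conversion itself is just a state-dict re-save. Here's a minimal sketch, assuming the .pth checkpoint is a plain PyTorch state dict (the filenames are examples only):

```python
# Minimal sketch of a .pth -> .safetensors conversion.
# Assumes the checkpoint is a plain state dict; the filenames are examples.
import torch
from safetensors.torch import save_file

ckpt = torch.load("t2iadapter_seg_sd14v1.pth", map_location="cpu")
# Some checkpoints nest the weights under a "state_dict" key.
state_dict = ckpt.get("state_dict", ckpt)
# save_file requires contiguous tensors.
state_dict = {k: v.contiguous() for k, v in state_dict.items()}
save_file(state_dict, "t2iadapter_seg_sd14v1.safetensors")
```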
Download these models and place them in the \stable-diffusion-webui\extensions\sd-webui-controlnet\models directory.
Please consider joining my Patreon! Advanced SD tutorials, settings explanations, adult-art, from a female content creator (me!) patreon.com/theally - I also have a write-up of ControlNet and will be updating regularly with the latest news/developments!
Description
There is very little info on what this is or how it works yet.
FAQ
Comments
What's the difference between these and the other ControlNet models?
So... they're created by Tencent's Applied Research Center (https://arc.tencent.com/zh/index). The Color and Style models are completely new concepts, and you'll need to update your ControlNet to be able to use them (I am still having problems with Style). The rest of the models are TencentARC's versions of the OG models - you'll have to read about what they did here (https://github.com/TencentARC/T2I-Adapter#-how-to-test) - it's complex stuff.
@theally And are there differences between these models and the ones you uploaded before, which already differed from the base ones? The Difference Models, I mean. Are they not better or worse, just all different?
@drakmour If they were exactly the same Tencent wouldn't have created them, and I wouldn't have uploaded them :)
From my tests they are worse than the OG models - harder to control, and sometimes different parts "leak" and create new things that were simply not in the image (maybe because of the beta status) - so I suggest just getting the Color + Style models for the new features.
@temp 100%, definitely agree.
@temp thanks for the warning. Style looks interesting.
They are also faster, though. They make a big difference for me: I have a 6GB card, and generally I cannot use ControlNet with LoRAs on medvram - I have had to use lowvram. But with these I can use ControlNet + LoRAs with medvram, which gives me quite a boost in speed. The only downside is that the openpose model appears unable to use the hands that are part of openpose dw.
One question: is there no soft edge T2I model?
I got them "running", but it appears to make no difference. How do I use them properly? What about the preprocessor - leave it empty?
You use them just the same as you would the OG ControlNet models - Depth uses the depth preprocessor, etc. The new model, Color, uses the Color preprocessor.
@theally My ControlNet doesn't give me a color preprocessor even though I have the most recent version according to A1111's Update Checker.
@BlankFX I heard the update checker was broken - I know someone who thought he was up to date but was missing all the latest features. Do you have the checkboxes for Invert Input Color, RGB to BGR, Low VRAM, and Guess Mode?
@theally From the GitHub page, the latest merge was 2 weeks ago. ControlNet, however, had an update yesterday. @BlankFX To get the latest version, you'll need to go to the Extensions tab in your webui and check for updates. You can also update it from the terminal, but you'd have to manually restart the webui if you do that.
Style adapter doesn't seem to work with "--medvram".
Ah, interesting. I haven't been able to get it to do much at all!
@theally Did you go over 75 tokens in your prompt? It looks like it fails then as well. Under 75 works.
@kaali111 Will have to try! Thanks
@kaali111 Just to clarify for everyone else, this is 75 prompt tokens, not steps or anything else - keep your prompts below 75 tokens and it will work.
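If you want to sanity-check a prompt's length outside the UI, here's a rough sketch using the stock CLIP tokenizer (the webui's own counter can differ slightly, and the prompt string below is just an example):

```python
# Rough token count for the 75-token Style limit, using the standard
# CLIP tokenizer. The prompt is an example only.
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
prompt = "a floral pattern, intricate, highly detailed"
# Subtract the BOS/EOS tokens the tokenizer adds automatically.
n_tokens = len(tokenizer(prompt)["input_ids"]) - 2
print(f"{n_tokens} tokens -", "OK" if n_tokens <= 75 else "over the limit")
```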
If it doesn't do anything, download the .yaml files as well.
I can't find the .yaml files.
Thanks a lot for converting them to safetensors! :D
They work the same as the pickled ones, don't they?
Can anyone share good examples of Style model usage? I'm trying to make something good, but to be honest the outputs look kind of strange to me.
Style is awesome, perhaps one of the best ControlNet features! Your prompt has to be less than 75 tokens or it won't work, but yeah - add a floral input pic, get flowers in your output. Add a skull with horns, get nightmare details added to the output, etc.
@theally got it, thx Ally.
How do I get ahold of the clip vision preprocessor by itself? I need it in order to use these with ComfyUI.
Please add Scribble
I propose adding CoAdapter.
Can someone show me how to use the seg model?
1. Load the image that you want to "segment" into ControlNet.
2. Set "Preprocessor" to "Segmentation".
3. Set "Model" to the t2iadapter_seg_ model that matches your checkpoint.
For T2I-Adapter, _sd14 means it's for SD 1.4 and _sd15 means it's for SD 1.5, I think.
Or, if you want to control composition in your generated image, check out "sd-webui-latent-couple".
@SnG17 thank you!
Does anyone know the difference between openpose and bodypose?
Hi, I need some help please.
I recently downloaded the T2I-Adapter models and activated multi-ControlNet. Now every time I want to use ControlNet (Style mode) it starts downloading pytorch_model.bin from https://huggingface.co/openai/clip-vit-large-patch14/tree/main. The download fails and I have to restart.
I tried deleting the cache but it didn't help. I read on GitHub that you should download the file manually, which I did, but I can't seem to find the right folder to put it in.
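One possible workaround is fetching the file manually with huggingface_hub - this is just a sketch, and the right destination folder depends on your extension version, so check the path the console prints when the automatic download starts:

```python
# Sketch: download the CLIP ViT-L/14 weights manually instead of letting
# the extension fetch them. Where the file needs to go depends on your
# ControlNet extension version - check the console output for the path.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="openai/clip-vit-large-patch14",
    filename="pytorch_model.bin",
)
print("Downloaded to:", path)  # move/copy this file to the expected folder
```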
You should use the naming convention provided by ControlNet 1.1. For example:
T2IAdapter_v11p_sd15_color.safetensors
T2IAdapter_v11p_sd15_color.yaml
If you follow this naming convention, the models will appear automatically in the model list of the Automatic1111 ControlNet extension when you click T2I-Adapter.
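If you have several files to rename, a small script like this sketch can do it - the path and the mapping are illustrative, so adjust both to your own install and the files you actually have:

```python
# Sketch of a batch rename to the ControlNet 1.1 naming convention.
from pathlib import Path

# Illustrative path and mapping - adjust both to your own setup.
models_dir = Path("stable-diffusion-webui/extensions/sd-webui-controlnet/models")
rename_map = {
    "t2iadapter_color_sd14v1": "T2IAdapter_v11p_sd15_color",  # example pair
}

for old, new in rename_map.items():
    for ext in (".safetensors", ".yaml"):
        src = models_dir / (old + ext)
        if src.exists():
            src.rename(models_dir / (new + ext))
            print(f"renamed {old}{ext} -> {new}{ext}")
```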
Is there a soft edge model too?
Is this the same lineart model, but smaller?
\stable-diffusion-webui\extensions\sd-webui-controlnet\models
Just in the root?
Hi, on HF there is also the model for "zoedepth". Can you add that as well? Also how did you get all the yaml files for these ControlNet models? I can't find them anywhere on the official site.
I am trying to use T2I-Adapter more in Forge UI, but all I can find is models... never preprocessors. What do I do? How do I get preprocessors to go with my models?
Ditto. It is driving me INSANE. Who releases it that way?
