I don't recommend high weights.
Setting the weight too high tends to break the balance instead.
"Shape balance correction" was made to add a sense of shape while using v2.0.
It is not recommended to use "Shape balance correction" alone, without v2.0.
Version 1 affects color;
version 2 affects drawing style.
I recommend using only one of versions 1 and 2.
Description
Must be used with v2.0.
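As a rough usage sketch (the embedding names and the 0.8 weight below are placeholders, not the actual file names), the negative prompt would carry v2.0 together with the correction, keeping the correction at a modest weight, in the embedding syntax quoted later in the comments:

```
negative prompt: (embedding:negative_v2:1.0), (embedding:shape_balance_correction:0.8), lowres, bad anatomy
```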
Comments (8)
No (edit: direct) comparison image - no download. That's the unfortunate reality, sorry.
ok
Good day sir/miss, is it normal for the embedding to alter the weights of the model? Using only this negative prompt I generate two completely different images; it "improves" the color balance but, for example, worsens the hands and the eyes. I'm baffled about what went wrong here.
Cheers.
The difference between LoRA and embeddings is that a LoRA acts as an adapter that transforms the model from the middle, while embeddings directly touch the block weights of the model. I used a one-sided dataset, so I may have unintentionally weakened even the block weights that were needed.
@colaisdelicious 'Ello mate! "Wow a month went by, much time" Ignore all that babbling below, I did research and edited:
~~That means embeddings start at step 0 and run to the end, while LoRA (at least on Comfy) goes from step 0.5 to the end unless a second pass is used to drop it?
So if I use an embedding just for the second pass, would that mean the latent image will suffer "a lot" and possibly give me deformities?
My idea is the opposite: using the embedding on the first pass and dropping it on the second/refiner, but it may be case by case...
Does adding a manual weight work? e.g.: ( embedding:negative :1.3)~~
Edit: So to put it simply, embeddings are just "prompt clusters" that are sent before and/or separately from the tokens going into the negative CLIPTextEncode, and their weight can be altered as a group with the correct format for the model you are using (in my case SDXL: "( embedding:negative :1.3)"). That means the model is affected as if I had introduced those prompts in a separate CLIPTextEncode and used ConditioningConcat to send them both into the KSampler. Am I right? 🤔
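For what it's worth, that mental model roughly matches what ComfyUI's Conditioning (Concat) node does: each prompt is encoded into a [batch, tokens, channels] tensor and the tensors are joined along the token axis. A minimal sketch with the standalone CLIP text encoder (the prompts and the "neg_embed" stand-in token are assumptions, not this embedding's actual trigger):

```python
import torch
from transformers import CLIPTokenizer, CLIPTextModel

# SD1.x-style CLIP text encoder; SDXL uses two encoders but the idea is the same.
tok = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
enc = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

def encode(prompt: str) -> torch.Tensor:
    ids = tok(prompt, padding="max_length", max_length=77,
              truncation=True, return_tensors="pt").input_ids
    with torch.no_grad():
        return enc(ids).last_hidden_state  # shape [1, 77, 768]

neg_text = encode("lowres, bad hands, blurry")  # ordinary negative tokens
neg_embed = encode("neg_embed")                 # stands in for the embedding's tokens

# Conditioning (Concat) joins the two along the token axis instead of mixing them.
combined = torch.cat([neg_text, neg_embed], dim=1)  # shape [1, 154, 768]
```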
@cookiechainsawxx LoRA is a method that directly modifies the parameter weights of a checkpoint model, i.e., the base pre-trained model. (This is why the compatibility of LoRA varies depending on the checkpoint. The more fine-tuned a model is, the more its parameters have been altered.) However, even if these parameters are reversed, the results are not exactly flipped. If we assume this is applied to the negative, the parameters that represent the opposite of this token need to be defined. While embeddings act as a type of "TOKEN," LoRA functions as an "ADAPTER." On the other hand, embeddings apply vector values extracted from images to the model by encoding them. While the results may not be as smooth as those of LoRA, embeddings place less strain on the model.
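That "ADAPTER" vs "TOKEN" distinction also shows up in how the two are loaded in code. A hedged sketch with diffusers (the model id, file paths, and the "negative_embed" trigger word are placeholders): a LoRA patches the checkpoint's weights through low-rank adapters, while a textual-inversion embedding only adds a new token that you then type into the negative prompt.

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# LoRA: low-rank adapter weights are injected into the UNet (and optionally the
# text encoder), so the checkpoint itself computes differently afterwards.
pipe.load_lora_weights("path/to/some_style_lora.safetensors")  # placeholder path

# Textual inversion: a learned token vector is registered with the tokenizer and
# text encoder; the checkpoint weights are left untouched.
pipe.load_textual_inversion("path/to/negative_embed.safetensors",
                            token="negative_embed")  # placeholder path and trigger

image = pipe(
    prompt="1girl, detailed illustration",
    negative_prompt="negative_embed, lowres, bad anatomy",  # embedding used as a token
).images[0]
```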
@cookiechainsawxx I'm not sure if this is the answer you were looking for.
@colaisdelicious Indeed, thanks a lot for taking the time to answer, have a nice day 👍🏻
