A bust slider LoRA for Klein. It changes the bust's size, and it also works on non-mannequin busts.
For mannequin busts, LoRA weights from -8.0 to 8.0 give coherent results.
While undesired side effects should be minimal, Klein still has its own flaws: with some seeds the model can be reluctant to apply the LoRA's effect, while with other seeds it has no issues.
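For intuition, a slider weight is just a scalar on the low-rank delta added to the base weights: W' = W + w·(B·A). Here's a toy sketch with plain Python lists to make the scaling explicit (illustrative only, not how your UI applies LoRAs internally):

```python
def apply_lora(base, lora_A, lora_B, weight):
    """W' = W + weight * (B @ A), written out with plain lists.
    Toy dimensions only; real LoRAs use tensors per layer."""
    rows, cols = len(base), len(base[0])
    rank = len(lora_A)  # number of rows in A = LoRA rank
    return [
        [base[i][j] + weight * sum(lora_B[i][r] * lora_A[r][j] for r in range(rank))
         for j in range(cols)]
        for i in range(rows)
    ]
```

At weight 0 the output equals the base weights; a negative weight pushes the delta the other way, which is why a single slider LoRA can move the effect in both directions.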
Hope you like it!
Comments (15)
Hello, this is truly impressive work. May I ask under which framework you conducted the training? Furthermore, how was the training data structured?
I'm using Ostris's AI-toolkit, his video tutorials contain all you need to start training!
@l226 Okay, thanks for the reply
@l226 Hi, you mean the Qwen YouTube tutorial he made applies to Klein as well?
@jeremyhola I don't remember which one I watched exactly, but the videos are a good starting point for understanding how LoRA training works with his tool; there's not much difference between models!
@l226 gotcha, thanks for the great work!
Hello, may I ask about the training parameters: is the batch_size 4 or 1? And the learning rate and gradient settings? Thanks for your reply!
It's really annoying how tightly some people keep their training data. They think this is a real business. Like... You make shit that's like NFTs that guys use to jerk off, get over yourselves.
@naken97184823dsdsdsdsd
1) I have a life sorry if I missed one comment.
2) I have answered a ton of these questions about how I train models whether in comments or direct messages.
3) You are here so I guess you are the NFT guy?
4) All my LoRAs are public and free, not sure what you mean by business here?
5) I don't keep my "training data" tightly. I use Ostris's ai-toolkit, which is well documented; I don't have a "secret sauce". Your message sounds like you think you can't make LoRAs because the people who do prevent you from doing so...
@FrankLew sorry I missed your comment. Batch 1! All my LoRAs are trained with batch size 1. Sliders usually require LR 1e-3, normal LoRAs 1e-4; I never changed the gradient settings.
The key for sliders is the wording, not the dataset.
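For reference, those numbers map onto an ai-toolkit config roughly like this. Field names follow the example configs shipped with the repo, so this is a sketch to orient you, not a verified config; check it against the examples in your ai-toolkit checkout:

```yaml
job: extension
config:
  name: "bust_slider"          # hypothetical job name
  process:
    - type: "sd_trainer"
      network:
        type: "lora"
      train:
        batch_size: 1          # batch 1, as above
        lr: 1e-3               # 1e-3 for sliders, 1e-4 for normal LoRAs
        optimizer: "adamw8bit"
        # gradient settings left at their defaults
```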
I see it's also fixing the background :)
Needs the blocks trimmed, or retraining while adjusting the params every few steps.
@naken97184823dsdsdsdsd Do you have a fast way setup to test blocks?
@addrain I don't! But... It doesn't take that long.
Use Realtime Lora, generate an image, and it'll tell you which blocks it used by color: red means not used at all, green means used most, and blue is in between. Turn everything but the green blocks off. Now you have a good start: your LoRA should still produce most of the effect, but even at this point some side effects should be gone.
From there, for Klein and a LoRA I tested, it was only actually using three blocks.
So now I could train that LoRA using only those three blocks in ai-toolkit (not sure whether that's faster, since it does less work), or I could manually adjust the remaining layers and keep regenerating.
For Klein it's around 20 layers; for Qwen it was around 60.
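The trimming step can also be done offline: load the LoRA's state dict, keep only the tensors belonging to the green blocks, and save the result. A minimal sketch with plain dicts; the `blocks.N.` key layout is an assumption, so inspect your own file's keys (e.g. with safetensors) before relying on the pattern:

```python
import re

def keep_blocks_only(state_dict, keep):
    """Return only the LoRA tensors whose key names a transformer
    block index in `keep`; everything else is dropped (equivalent
    to turning those blocks off entirely)."""
    pattern = re.compile(r"\bblocks\.(\d+)\.")
    return {
        key: tensor
        for key, tensor in state_dict.items()
        if (m := pattern.search(key)) and int(m.group(1)) in keep
    }

# Toy state dict with fake tensors; real keys come from the .safetensors file.
sd = {
    "transformer.blocks.3.attn.lora_A.weight": "A3",
    "transformer.blocks.7.attn.lora_A.weight": "A7",
    "transformer.blocks.19.mlp.lora_B.weight": "B19",
}
trimmed = keep_blocks_only(sd, {3, 19})  # drops block 7
```

The same filter covers the "three blocks" case: pass those indices as `keep`, re-save, and the dropped blocks can no longer contribute side effects.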
I find this LoRA very easy to use and quite accurate with what I want to generate! It's very easy to hit the size I have in my head, instead of fighting the checkpoint 1v1 mid with a fish stick as my only weapon. Good job and thanks a lot!
