Please react to my videos: https://civarchive.com/user/ntc/images?sort=Most+Reactions
Trained for 2000 steps
"#:0.04|highly detailed, sharp focus, intricate, smooth, elegant, 8 k, fantasy, cinematic lighting, cinematic, masterpiece, matte, photorealistic, 4 k, 8k, beautiful, volumetric lighting, dramatic lighting, detailed, realistic, intricate details, ultra realistic, high detail, centered, hyper detailed, 4k, 8K, hyperrealistic, hd:0.2"
https://github.com/ntc-ai/conceptmod
This prompt consists of the most popular prompt phrases, by usage, from:
https://huggingface.co/datasets/Gustavosta/Stable-Diffusion-Prompts
animation:
https://github.com/ntc-ai/conceptmod/blob/main/lora_anim.py
This does not include any people, artists, or websites, so use it however you want.
Comments (14)
Can you explain in more detail exactly what this LoRA does and how to use it?
This LoRA represents the concept of the most popular prompts in https://huggingface.co/datasets/Gustavosta/Stable-Diffusion-Prompts . To use it, add the trigger words and apply the LoRA at a strength of up to 2.5. You can also use it without the trigger words for a weaker effect.
The LoRA is fine-tuned by self play and does not use a dataset. It measures the internal representations of the phrase during training and trains the attention layers to produce output closer to those representations.
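For the curious, the mechanism described above can be sketched as a toy optimization. This is a minimal illustration of the idea only, not conceptmod's actual code; the vectors, the `nudge_toward_phrase` helper, and the learning rate are all hypothetical stand-ins, with the 2000 steps and the 0.2 weight taken from the description and training string above:

```python
import numpy as np

# Toy illustration of the described "self-play" idea (NOT the real conceptmod
# code): with no image dataset, training repeatedly nudges an internal
# representation toward a weighted representation of the target phrase,
# analogous to the ":0.2" weight in the training string.
def nudge_toward_phrase(current, target, steps=2000, lr=5e-3, weight=0.2):
    """Take `steps` gradient steps on 0.5 * ||weight*target - current||^2."""
    current = current.astype(float).copy()
    goal = weight * target
    for _ in range(steps):
        current += lr * (goal - current)  # gradient descent toward the goal
    return current

rng = np.random.default_rng(0)
phrase_repr = rng.normal(size=8)  # stand-in for the phrase's internal representation
tuned = nudge_toward_phrase(np.zeros(8), phrase_repr)
```

After 2000 small steps the "attention output" stand-in has essentially converged to the weighted phrase representation, which is the sense in which the layers are trained to "look more that way."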
@ntc That was as clear as mud. Are you an AI yourself?
Do we need to put in the entire list of prompts, in order? Because you specified all of them, in that order, as one trigger word.
@DeviantRava My understanding is that there is no trigger word, because it affects the generation result before the prompt is even taken into account. That means you CAN add the terms the LoRA was trained with for even more emphasis, but that raises the issue of prompt attention being spread thin across so many extra terms.
According to the sample renders, it appears to create a giant penis. Seriously folks, do we really need to see a hundred images of a massive penis? Doesn't anyone else have some samples they can offer to show us what this LoRA is capable of?
@Talk2Giuseppe Instead of bitching about it here, take it up with the guy who uploaded all of them for not grouping them together properly.
Hampter! Good find.
So if I used this LoRA + "masterpiece", would it be different than just adding weight normally, like "(masterpiece:1.3)" for example? The biggest problem with just heaping weight onto a keyword is that each generation only has so much attention to go around, and if you weight things too heavily or have too many competing weights/tags, they often just override each other or fry the output.
So if this can circumvent that somehow, it would be great, but if it just forces more attention onto popular keywords the same way up-weighting does, I'm not sure it serves a real function.
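For reference, the "(masterpiece:1.3)" syntax mentioned above can be parsed with a few lines of Python. This is a simplified sketch assuming A1111-style syntax; it only handles flat "(text:weight)" groups, no nesting or bare parentheses, and `parse_weights` is an illustrative name, not part of any real tool:

```python
import re

# Minimal sketch of A1111-style "(word:1.3)" weight parsing (assumed,
# simplified syntax: only explicit "(text:weight)" groups, no nesting).
WEIGHT_RE = re.compile(r"\(([^():]+):([\d.]+)\)")

def parse_weights(prompt):
    """Return (token_text, weight) pairs; unweighted text gets weight 1.0."""
    parts, last = [], 0
    for m in WEIGHT_RE.finditer(prompt):
        if m.start() > last:
            parts.append((prompt[last:m.start()].strip(", "), 1.0))
        parts.append((m.group(1), float(m.group(2))))
        last = m.end()
    if last < len(prompt):
        parts.append((prompt[last:].strip(", "), 1.0))
    return [(t, w) for t, w in parts if t]

pairs = parse_weights("a photo, (masterpiece:1.3), detailed")
```

The point of the question is that every weighted token still competes for the same fixed attention budget, which is exactly what the reply below addresses.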
I don't know how different the approaches are, but they are different. This training runs for 2000 steps and moves the unconditional output (the empty-string result). Adjusting the prompt weight only moves the conditional output (the phrase), at inference time.
Feel free to share if you figure it out.
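The conditional/unconditional distinction above can be made concrete with the standard classifier-free guidance formula, `out = uncond + scale * (cond - uncond)`. This is a toy numeric sketch of that arithmetic, not code from this LoRA or from conceptmod; the vectors and the `shift` amount are made up for illustration:

```python
import numpy as np

# Standard classifier-free guidance combination (toy vectors, not real latents).
def cfg(uncond, cond, scale=7.5):
    return uncond + scale * (cond - uncond)

uncond = np.array([0.0, 0.0])   # stand-in for the empty-string prediction
cond = np.array([1.0, 0.5])     # stand-in for the prompted prediction

baseline = cfg(uncond, cond)

# Up-weighting a phrase moves only the conditional branch...
weighted = cfg(uncond, 1.3 * cond)

# ...while (per the reply above) this training shifts the unconditional
# branch, so it changes the result regardless of what the prompt says.
shift = np.array([0.2, 0.2])
lora_like = cfg(uncond + shift, cond)
```

With the default scale of 7.5, shifting the unconditional branch by `shift` moves the final output by `-6.5 * shift`, even for an empty prompt where `cond == uncond`, whereas prompt weighting does nothing in that case.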
This is exactly what you are describing: a way to not waste tokens and weights on commonly repeated concepts. It should do something similar to having all those "masterpiece, etc..." tokens at the beginning, without having to actually put them in the prompt.
(BTW, "attention" means something else in the context of neural networks.)
@ntc: This is absolutely amazing! Thanks for making this. I'm loving all of your concept LoRAs. Have you thought about making a tutorial for them? I can see this becoming a popular way to fine-tune.
Having generated a couple hundred images each with and without it and seen no notable difference in results, I can confidently say that this "LoRA" does nothing whatsoever.
If you're trying to save tokens by wrapping up a bunch of common keywords in one, use an embedding instead.
dude the animations show it doing things
The "animations" show a bunch of generated images interpolated together. That does not prove this 3MB placebo actually does anything. Particularly since your generation data explicitly includes all the keywords this is allegedly supposed to condense and subsume.
Just like a couple of beginning-words?