Use the tag / trigger phrase: cum on her face
Description
This is an undertrained (1500 steps vs 3000 steps) version of the LoRA.
FAQ
Comments (5)
- z_cum_on_her_face_000001500 -
I wanted to create a new character with known, controllable features before I dove into testing this one. To that end I created Breen Becker. We think she may have been a super-secret-agent-spy kinda thing and is now retired from that profession, but we'll be seeing a lot of her in her upcoming movies and a TV show.
This is definitely leaning in a more versatile direction. It also has some interesting characteristics which I can't quite identify, but I'm liking it very much.
It doesn't appear to be interfering with my character LoRA in any significant way.
Maybe the answer is I overcooked the first one, and I should have used fewer steps.
In regard to the question raised about an earlier LoRA being more effective than its matured counterpart...
It certainly makes sense that fewer steps lead to learning less of something, so the model would pick up less of the personal characteristics, and possibly less of the intended theme as well.
I wouldn't know how other people train LoRA; I can only read and test. I've tried everything I've read, even things that didn't make sense, because this entire area of study is mysterious, so I expect some things to appear illogical and still work, even if for some unrelated reason.
If it's true that an earlier LoRA is better at something than its more mature counterpart, then I guess we can assert that the counterpart has been "overtrained". Though I've seen descriptions ranging from burned-in colors to body horror, I haven't identified what it is that the community commonly believes about this subject. I'll provide a few characteristics from my own experience below.
The idea of "overtraining" is elusive to me; I don't know that I've experienced it in the way I've read people describe it. I've deliberately "gone too far" with many LoRA, after they've become useful, in order to identify what it means and what it looks like when this happens. While I can personally identify things I didn't like about training with additional steps, I can find the same peculiarities if I apply the same critical eye to the earlier results. However, there are three things I think I've discovered about this, two of which may be the exact same thing: the weight window of usefulness starts to shrink, deformities become more frequent, and prompt adherence becomes less effective.
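One testable way to probe that shrinking weight window is to sweep the LoRA scale with a fixed seed and run the same sweep against both checkpoints. Here's a minimal sketch, assuming a diffusers-compatible pipeline with the PEFT backend; the model path, LoRA path, and prompt are placeholders:

```python
import torch
from diffusers import DiffusionPipeline

# Placeholder paths; swap in the actual base model and LoRA checkpoint.
pipe = DiffusionPipeline.from_pretrained(
    "path/to/base-model", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("path/to/lora-folder", adapter_name="concept")

prompt = "trigger phrase here, plus a fixed test scene"
seed = 42  # fixed seed so the LoRA scale is the only variable

# Sweep the adapter weight; compare the 1500-step and 3000-step
# checkpoints to see which one's usable window is narrower.
for scale in (0.4, 0.6, 0.8, 1.0, 1.2):
    pipe.set_adapters(["concept"], adapter_weights=[scale])
    image = pipe(
        prompt,
        generator=torch.Generator("cuda").manual_seed(seed),
    ).images[0]
    image.save(f"scale_{scale:.1f}.png")
```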
So, maybe someone can clarify what it means to overtrain a LoRA, with some testable characteristics, because I haven't been able to match the commonly provided description of this apparition with any experience I've had so far. What I did run into is "the fuzzies" and banding, which happened with Flux, but an alternate approach to training removed those issues. I won't get into all that here, and I don't see anything similar with Z-Image.
So rather than try to hit a narrow and delicate target, I'd suggest a different training approach, one where the concept can be trained for as many steps as it takes to strengthen the target, without worrying about overtraining. To that end I'd like to suggest, again, masking, but not the method that appears to be commonly used (when it's used at all); a sketch of that common baseline follows below.
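For reference, the commonly used form of loss masking looks roughly like this: the training loss is weighted by a mask so gradients only flow from the concept region. This is a minimal sketch of that baseline, assuming you already have the model prediction, the target, and a latent-space mask; my alternative isn't detailed here.

```python
import torch
import torch.nn.functional as F

def masked_diffusion_loss(model_pred, target, mask):
    """Common baseline: restrict the per-element loss to the concept region.

    mask is expected in latent space, broadcastable to model_pred's shape,
    with values near 1 inside the concept region and near 0 outside.
    """
    per_elem = F.mse_loss(model_pred, target, reduction="none")
    mask = mask.expand_as(per_elem)  # broadcast a 1-channel mask if needed
    masked = per_elem * mask
    # Normalize by the number of unmasked elements rather than the full
    # tensor, so sparse masks don't shrink the effective learning rate.
    return masked.sum() / mask.sum().clamp(min=1.0)
```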
Since this write-up has become another beast, I'll be adding to it through an article.
Has anyone had luck using this for img2img?
I haven't tried img2img with ZIT.
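For anyone who wants to test it, here's a minimal img2img sketch with diffusers; the model path, LoRA file, prompt, and strength value are placeholder assumptions, not something verified against ZIT.

```python
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

# Placeholder paths; swap in the actual base model and LoRA checkpoint.
pipe = AutoPipelineForImage2Image.from_pretrained(
    "path/to/base-model", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("path/to/lora-folder", adapter_name="concept")

init_image = load_image("input.png")
result = pipe(
    prompt="trigger phrase here, plus a scene description",
    image=init_image,
    strength=0.6,  # lower values preserve more of the source image
).images[0]
result.save("img2img_out.png")
```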
