Trained on Hardblend, with sample images generated on Juggernaut (the LoRA simply works better with this combination). It can produce both SFW and NSFW results due to the base model's capabilities, so prompt wisely.
Instance token: cstm
Class token: woman
Comments
600 MB?? Damn! Man, you should reduce your Network Rank (Dimension) and Network Alpha during training.
I found that reducing the rank lowers the quality and flexibility of the resulting LoRA, but I haven't experimented enough across different configs to give concrete numbers. This is what I found works best, and I use a LoRA weight of 0.6 at inference.
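The rank-vs-size tradeoff being debated here is roughly linear: a LoRA stores two low-rank matrices (d_in x r and r x d_out) per adapted layer, so file size scales with the network rank. A back-of-envelope sketch, with a hypothetical set of layer dimensions (not the actual layers of this model):

```python
def lora_param_count(layer_dims, rank):
    # Each adapted layer gets a down-projection A (d_in x r)
    # and an up-projection B (r x d_out).
    return sum(rank * (d_in + d_out) for d_in, d_out in layer_dims)

# Hypothetical adapted attention projections, (d_in, d_out) pairs.
layers = [(768, 768)] * 64
fp16_bytes = 2  # bytes per parameter at fp16 save precision

for rank in (32, 128):
    mb = lora_param_count(layers, rank) * fp16_bytes / 2**20
    print(f"rank {rank}: ~{mb:.0f} MB")
```

Quadrupling the rank quadruples the file, which is why dropping the rank is the usual first lever for shrinking a 600 MB LoRA.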
@Uthrael Just a tip: if you are using kohya, try keeping it under 150 MB; I get amazing results and flexibility with my models. 600 MB is too much. I mean, you do get more flexibility, but you are consuming space that could hold three more models.
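In kohya's sd-scripts, the size lever the comment refers to is the `--network_dim` flag (the Network Rank/Dimension), usually set together with `--network_alpha`. A sketch of the relevant part of a training command; the paths are placeholders and the specific values are illustrative, not a verified recipe:

```shell
# Sketch: LoRA training with a reduced network rank in kohya sd-scripts.
# --network_dim is the Network Rank (Dimension); --network_alpha scales the updates.
accelerate launch train_network.py \
  --pretrained_model_name_or_path="/models/base.safetensors" \
  --train_data_dir="/data/train" \
  --output_dir="/out" \
  --network_module=networks.lora \
  --network_dim=32 \
  --network_alpha=16 \
  --save_precision=fp16
```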
@Adeleine_ I train with Dreambooth, but these are great tips; I will experiment with lower ranks and check. I also run everything locally, so space is not an issue, but I see why smaller sizes matter.


