CivArchive
    Fame-Girls Ella - v1.0
    NSFW

    Ella Pasjakina, a (now retired) softcore model from the Fame-Girls studio. Trained on images of her in her early to mid 20s.

    Try starting with a LoRA weight of 0.7 and a trigger-word weight of 1.2, and adjust from there.
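
    For example, in Automatic1111-style prompt syntax (an assumption; the file name and trigger word come from the Details section below):

        <lora:fgella_obj20:0.7>, (fgella:1.2), rest of your prompt here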

    Description

    FAQ

    Comments (2)

    Bit_Shifter · Dec 21, 2023 · 9 reactions
    CivitAI

    The body shape/type is right, but the face looks nothing like her; the jaw, chin, eyes, and everything else are off. Looking at the LoRA's metadata, I believe I see why.

    A couple of suggestions:

    Create a second folder of training images. Put 10 close-up, high-resolution images (face only) in it and treat it as a second concept, but use the same trigger and class as your first training-image folder when naming it: keep the folder name identical except for the repeat number, which in this case becomes 50. This will force SD to train exclusively on her face for a while, and since the trigger and class are the same, the additional face training will be applied to full-body images as well. And since that folder's total step count is about 10% lower than the total step count for the first image folder, the finished LoRA won't exclusively produce close-up images (unless prompted).
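
    A minimal sketch of that layout, assuming a kohya-style trainer where the leading number in each subfolder name sets the repeat count ("woman" is a placeholder class token, not from the original post):

        train_images/
            13_fgella woman/    <- 43 primary (full-body) training images
            50_fgella woman/    <- 10 close-up face images, same trigger and class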

    Generally you want the number of face images to be roughly 25% of the number of primary training images. To determine how many repeats the face image folder needs: first, determine how many total training steps your primary image folder will produce (43 images x 13 repeats = 559 steps). Then determine how many repeats are needed to get 10 close-up face images trained for close to 559 steps (559 steps / 10 images = 55.9). In this case, 56 repeats on 10 images would produce nearly the same amount of training as 13 repeats on 43 images. But we don't want equal training, which would cause the LoRA to produce close-ups, so we subtract 10% from 55.9. That gives us 50.31, or, ultimately, 50 repeats for the close-up folder. This is the trick to more accurate faces on full-body images.
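
    The same arithmetic as a short Python sketch (all numbers are taken from the comment above):

        # 43 primary images at 13 repeats -> steps per epoch for that folder
        primary_steps = 43 * 13                    # 559
        face_images = 10
        raw_repeats = primary_steps / face_images  # 55.9 repeats for equal training
        face_repeats = round(raw_repeats * 0.9)    # back off 10% -> 50
        print(primary_steps, raw_repeats, face_repeats)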

    Another suggestion: using dAdaptation is great for not needing to fine-tune learning rates, etc. However, another reason there is very little likeness is that dAdapt may not have had enough time to learn fine details if it is starting at lr=1, unet lr=1, te lr=0.5. The starting learning rate may need to be decreased; SD will never learn any fine details if it is being rushed through the training data by dAdapt. I would recommend using another optimizer or finding a way to put the brakes on dAdapt.
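
    As a rough illustration of putting the brakes on dAdapt: in the standalone dadaptation package (an assumption; the Colab trainer discussed below wires this up differently), the lr argument acts as a multiplier on the adapted step size, so per-group values below 1.0 slow it down:

        import torch.nn as nn
        from dadaptation import DAdaptAdam  # pip install dadaptation

        # Placeholder modules standing in for the UNet and text encoder.
        unet = nn.Linear(8, 8)
        text_encoder = nn.Linear(8, 8)

        optimizer = DAdaptAdam(
            [
                {"params": unet.parameters(), "lr": 0.5},           # halved unet LR multiplier
                {"params": text_encoder.parameters(), "lr": 0.25},  # halved TE LR multiplier
            ],
            lr=1.0,  # global default; the per-group values above override it
        )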

    Your repeats for your training images were set to 13. This may be too low if dAdapt isn't lowering the LR aggressively enough.

    Lastly, a larger batch size isn't always better; six in this case is overkill. Lowering the batch size from 6 to 3, increasing gradient accumulation steps from 1 to 3, and then increasing your training resolution from 512x512 to 768x768 would all be ideal. You would be lowering the batch size to lower VRAM usage, and the freed VRAM will allow increasing the training resolution. A higher training resolution will cause the LoRA to produce substantially better results w/o relying so heavily on Hires.fix and Adetailer. Training time will not suffer much because of the change to gradient accumulation steps, but when doing this, you will need to get the LRs down: for 768x768, LR and unet LR would need to be somewhere around 0.0001 to 0.00005, and TE LR would need to be half of whatever LR you settle on. Your repeats per concept would likely need to increase to somewhere around 40+, possibly as high as 80.
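
    Those suggestions, expressed as kohya sd-scripts-style options (the option names are an assumption; adapt them to whatever fields your trainer exposes):

        train_config = {
            "train_batch_size": 3,             # down from 6
            "gradient_accumulation_steps": 3,  # up from 1; effective batch becomes 3 x 3 = 9
            "resolution": "768,768",           # up from 512x512
            "learning_rate": 1e-4,             # somewhere in the 0.0001-0.00005 range
            "unet_lr": 1e-4,
            "text_encoder_lr": 5e-5,           # half of whatever LR you settle on
        }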

    Sorry for the information overflow. Hope it helps.

    baddrudge
    Author
    Dec 21, 2023

    Thanks for the suggestions. I'm unfortunately stuck with DAdapt for now since I use the Linaqruf trainer, which works on Colab, but their recent changes caused the other optimizers to fail (DAdapt has been my go-to optimizer prior to this anyway). I've only tried tweaking the DAdapt settings so far; halving the initial learning rates (to 0.5 and 0.25) gave a small but noticeable improvement. I did try even smaller initial learning rates of 0.1 and 0.05, but the LoRA barely learned anything. I'll post an update soon.

    LORA
    SD 1.5

    Details

    Downloads
    155
    Platform
    CivitAI
    Platform Status
    Deleted
    Created
    12/18/2023
    Updated
    5/13/2026
    Deleted
    5/23/2025
    Trigger Words:
    fgella

    Files

    fgella_obj20.safetensors

    Mirrors

    CivitAI (1 mirror)
    TensorFiles (1 mirror)