    Nice Nature (umamusume) - LoHa

    LoHa of Nice Nature (umamusume). Put the LoHa file in your stable-diffusion-webui/models/lora folder and add the LoRA notation to your prompt to apply the LoHa: <lora:your_loha_name:weight>

    All of my work is free. If you'd like to support me, feel free to buy me a coffee. 👍

    Recommended options

    • LoRA weight 0.7~0.8

    • Trigger words

      • base

        • nice nature \(umamusume\)

      • Racing competition costume:

        • ear covers, green bowtie, diagonal-striped bowtie, striped puffy sleeves, puffy long sleeves, juliet sleeves, grey shirt, double-breasted, buttons, black dress, frilled dress, pinafore dress, thigh strap, o-ring, red socks, brown footwear, knee boots, cross-laced footwear, lace-up boots, kneehighs

      • Cheerleader costume:

        • official alternate costume, cheerleader, ponytail, long sleeves, crop top, white shirt, sailor collar, midriff, navel, blue jacket, open jacket, belt, layered skirt, pleated skirt, white skirt, miniskirt, thigh strap, white socks, red footwear, sneakers, shoes

        • to remove the pom poms, use a tag involving hands (e.g., hand on hip)

        • (optional) shorts under skirt, short shorts, orange shorts

          • but adding detail in the same location is hard for the AI.

      • Casual clothes:

        • casual, ear covers, long sleeves, wide sleeves, red shirt, white shirt, denim skirt, handbag, shoulder bag

      • Swimsuit:

        • water, wet clothes, blue swimsuit, blue one-piece swimsuit, wet hair, poolside, competition school swimsuit, wet swimsuit, single vertical stripe

      • Tracen school uniform (summer):

        • tracen school uniform, purple shirt, pleated skirt, puffy short sleeves, white skirt, puffy sleeves, summer uniform, frilled skirt, sailor collar, sailor shirt, miniskirt, frills, white thighhighs
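    As a sketch only (the file name nice_nature_loha is a placeholder; use whatever you named the downloaded file), a prompt combining the base trigger word, one of the costume tag sets above, and the recommended weight might look like this:

    ```
    masterpiece, best quality, 1girl, solo, nice nature \(umamusume\), official alternate costume, cheerleader, ponytail, long sleeves, crop top, white shirt, pleated skirt, white skirt, <lora:nice_nature_loha:0.7>
    ```

    The weight 0.7 sits at the low end of the recommended 0.7~0.8 range; raise it toward 0.8 if the character likeness is weak.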

    Use a model derived from or mixed with the animefull model.

    • Settings

      • Stable Diffusion WebUI by Automatic1111

      • DPM++ series samplers (SDE Karras, 2M Karras, etc.)

      • about 20 steps, CFG scale 3.5~6.5

      • I recommend kl-f8-anime2.vae

      • CLIP skip = 1

      • Use Hires. fix to get higher-quality images

        • Upscaler

          • Latent, Latent (nearest-exact), Latent (bicubic antialiased), or other Latent-series upscalers.

        • Denoising strength 0.50~0.65

    LoRA training info

    • trained with sd-scripts by kohya_ss.

    • based on Animefull-pruned model

    • 169 images of nice nature

      • low quality: 57 images × 4 repeats

      • medium quality: 49 images × 8 repeats

      • high quality: 63 images × 16 repeats

      • => 1 epoch = 1628 images

    • replaced character feature tags with nice nature \(umamusume\)

      • 1girl, twintails, bangs, blue eyes, streaked hair, horse tail, grey eyes, green eyes, red eyes, horse girl, solo, brown hair, hair between eyes, simple background, multicolored hair, two-tone hair, tail, tail through clothes, medium hair, animal ears, yellow eyes, brown eyes, horse ears

      • I think these tags are either

        • essential features of nice nature, or

        • unnecessary for training because they duplicate other tags.

    • resolution: 768x768 with aspect ratio bucketing.

    • other training settings are included in training_data.zip or in the LoRA file's metadata.
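    As a quick sanity check on the dataset numbers above — reading the 4x/8x/16x figures as kohya-style per-folder repeat counts, an assumption that the stated total of 1628 is consistent with — the per-epoch image count works out as follows:

    ```python
    # Effective images seen per epoch = sum(image_count * repeat_count)
    # across the three quality buckets listed above.
    buckets = [
        (57, 4),    # low quality: 57 images, 4 repeats
        (49, 8),    # medium quality: 49 images, 8 repeats
        (63, 16),   # high quality: 63 images, 16 repeats
    ]

    total_images = sum(count for count, _ in buckets)            # unique images
    epoch_size = sum(count * repeats for count, repeats in buckets)

    print(total_images)  # 169
    print(epoch_size)    # 1628
    ```
    
    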

    All uploaded images were generated with AOM3A1.

    Special thanks to my best friend, DH.

    Description

    LoHa version. Please install this extension on WebUI, or install a1111-sd-webui-locon from the Extensions > Available tab (both are the same).


    Comments (15)

    allcolor0609179 · Mar 15, 2023 · 1 reaction

    Question, I can't seem to get LYCORIS to load properly.

    I have a1111-sd-webui-locon installed....

    Lora works and I want to use nice nature!

    mht
    Author
    Mar 15, 2023

    Someone reported the same issue. Maybe it's caused by the version of WebUI or of the LyCORIS extension.

    But I can't reproduce the issue, so this might be the wrong solution. Sorry 😢

    Or put the LoHa file directly in the webui/models/lora folder. Someone solved the problem this way.

    allcolor0609179 · Mar 16, 2023

    @mht Thanks for getting back to me!

    Still doesn't work, so I'll do some research.

    mht
    Author
    Mar 16, 2023

    @allcolor0609179 Someone solved this by uninstalling and re-installing the locon extension. Could you try that?

    allcolor0609179 · Mar 16, 2023 · 1 reaction

    @mht I updated the whole SD installation and it worked! Thanks for all the replies! I'll keep using your models!

    Yuii_miu · Mar 16, 2023

    Hiii, I really like your model, but I can't get perfect hands like you do >^< May I ask what you do to fix the weird hands?

    mht
    Author
    Mar 17, 2023

    Weird hands are a problem of the Stable Diffusion model itself. I also hope a model that fixes this will be released someday. 🤔

    Riyu25 · Mar 17, 2023 · 1 reaction

    Hello there!

    First, thank you very much for your awesome work with your character models, and thank you for sharing them with the community. I was wondering, is there somewhere I can support your creations?

    Additionally, if you would be interested, is there somewhere we can privately discuss a possible opportunity? (I am in search of someone to study under regarding character modeling)

    mht
    Author
    Mar 18, 2023

    Hello! First, thanks for your kindness. If you're interested in making a LoRA or model, the comments in this thread will be helpful; some of my work process is described there. (Comments on power70521)

    Not yet; I have no plans to create a private communication channel. If you have any questions, please feel free to ask here. :)

    Riyu25 · Mar 18, 2023

    @mht Thank you very much for your reply!

    I understand, thank you for letting me know. If this ever changes, it would be an honor to study your technique (and you would be compensated for your help of course).

    Also, thank you for linking the previous thread and also for providing the training data for your models. I have been studying these and will continue to do so!

    If you wouldn't mind, the following are some questions regarding how you approach training in general. If you are able to answer any of them, I would be very grateful, but if you can't it's no problem!

    1. Is there a specific way you approach a new training? For example, do you usually use the same parameters for the first training run of various characters, or does it vary depending on certain factors? This question can exclude any discussion of tagging, as I'm learning that through the provided training data.

    2. Would you mind explaining how you evaluate the initial epochs to determine how to improve the model on further trainings? I understand that many trainers utilize x/y testing, though I'm wondering how you determine what changes to make to the next training as you test the current models. (I understand this can be a very broad topic, so no need to feel like you'd need to cover everything)

    3. When training, do you utilize the logging information much? If yes, is there anything you are specifically looking for in the log data?

    Thank you again, and I will look forward to your future models =)

    mht
    Author
    Mar 19, 2023

    @Riyu25 1. The parameters are determined by my experience, the size of the model, and the optimizer. Usually, a bigger model (in LoRA terms, higher rank or dimension) needs a lower learning rate, and some optimizers require specific conditions: in the case of D-Adaptation, there is no need to set a learning rate, so just set lr to 1.0. With all of that in mind, I decide the parameters. They are independent of the training data (characters), so until the model size or algorithm changes, I recycle the parameters I set before.

    2. First, I look at the logs in TensorBoard and choose the models at local loss minima; in many cases, three models are chosen. Then, with an x/y/z plot varying seed, epoch, and weight, I generate images for the selected models. Usually, I render five character concepts in the x/y/z plot, rate each concept's images, and select the best epoch's model.

    3. TensorBoard covers everything I use, including real-time monitoring. To view the log files, run this in the console: tensorboard --logdir="YOUR LOG PATH"

    Riyu25 · Mar 19, 2023

    @mht Thank you very much for taking the time to reply, I really appreciate it!

    Riyu25 · Mar 20, 2023

    If you have a moment, would you also mind explaining whether there is a benefit to training LoHas with a CLIP skip value of 1? If I am not mistaken, NAI's animefull-pruned model has a CLIP skip of 2, so I would have assumed this was necessary to include for training.

    mht
    Author
    Mar 21, 2023

    @Riyu25 Yes, NAI's anime model was trained with CLIP skip 2. Here is why NAI uses clip skip 2.

    However, if you set clip skip to 1 during training, LoRA (and LoCon, LoHa) trains all layers of the text encoder in the SD model with a small rank.

    The NAI model uses clip skip 2 because of time and quality: they needed to train the entire SD model.

    We have LoRA, a super simple and small injectable model. Whatever clip skip the base model was trained with, we inject LoRA to train all layers of the base model (in my case, the animefull model).

    So it can be beneficial to use clip skip 1 in training, and it doesn't take much extra time. With clip skip=1, we can control all layers of the text encoder. This is why I chose clip skip 1.

    Whether clip skip is 1 or not, an image that has passed through the layers with LoRA applied will be the intended result.

    Riyu25 · Mar 23, 2023

    @mht That is tremendously helpful to know, thank you so much for the detailed explanation!

    LoCon
    SD 1.5
    by mht

    Details

    Downloads
    6,392
    Platform
    CivitAI
    Platform Status
    Available
    Created
    3/15/2023
    Updated
    5/11/2026
    Deleted
    -
    Trigger Words:
    nice nature \(umamusume\)
