This LoRA generates images based on the style of Itaru Hinoue's artwork for Tactics/Key's 2000s-era visual novels, specifically MOON.
I've tested it with a few models, including AOM2 and Anything 4.5, and it works well.
This LoRA is fantastic for getting a 90s or early 2000s aesthetic into images. You can control the strength and merge it with other models to get the perfect mix of styles.
Notes
Unlike my other LoRAs, there is no specific tag to invoke the style. However, this one does include tags for invoking specific characters.
Description
This was trained on 187 images with 5 repeats for 15 epochs, at a batch size of 4.
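For reference, those numbers work out to roughly the following total step count (a quick sketch; the exact count can vary slightly depending on bucketing and how the last partial batch is handled):

```python
import math

images, repeats, batch_size, epochs = 187, 5, 4, 15

# 187 images x 5 repeats = 935 samples per epoch
steps_per_epoch = math.ceil(images * repeats / batch_size)
total_steps = steps_per_epoch * epochs

print(steps_per_epoch)  # 234
print(total_steps)      # 3510
```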
The dataset was tagged with the following, alongside character descriptions. These tags are optional.
amasawa ikumi, amasawa ikumi (track), amasawa ikumi (young), amasawa miyako, kanuma youko, kanuma youko (pajamas), kanuma youko (young), mima haruka, mima ryousuke, nakura yui, nakura yuri, shounen, shounen (school uniform), takatsuki
Comments (13)
Very nice! How long did it take you to train these?
v0 took about 10 minutes. v1 took an hour. Using a 3090Ti.
@10komi No way! After hearing that, I've decided to buy a new GPU.
By the way, which LoRA learning retopology did you use for training? I already have training images, but I'm not sure which LoRA training retopology is best. (I'm using SD WebUI.)
Do you have any recommendations?
@misora Sorry, I don't know what you mean by retopology.
These training settings were used:
network dimension 128,
alpha 64.0,
scheduler cosine_with_restarts,
warmup ratio none,
learning rate 5e-05,
text encoder lr/unet lr are null (not used),
batch size 4,
number of epochs 15,
training resolution 768,
shuffle captions off
I used these scripts for training (I haven't tried the WebUI's LoRA training yet):
https://github.com/kohya-ss/sd-scripts and https://github.com/derrian-distro/LoRA_Easy_Training_Scripts
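As a rough illustration, the settings above would map to a kohya-ss `train_network.py` invocation along these lines. The paths and the base model name are placeholders, and the flag names are my best reading of the sd-scripts CLI, so double-check them against the repo before running:

```shell
# Sketch of the training settings as a kohya-ss sd-scripts command.
# "shuffle captions off" just means omitting --shuffle_caption.
accelerate launch train_network.py \
  --pretrained_model_name_or_path=/path/to/base_model.safetensors \
  --train_data_dir=/path/to/dataset \
  --output_dir=/path/to/output \
  --network_module=networks.lora \
  --network_dim=128 \
  --network_alpha=64 \
  --learning_rate=5e-5 \
  --lr_scheduler=cosine_with_restarts \
  --train_batch_size=4 \
  --max_train_epochs=15 \
  --resolution=768,768 \
  --enable_bucket
```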
LoRA weight: 1, model: moonLora_1(80fce839)
Error running process:
Traceback (most recent call last):
File "C:\stable-diffusion-webui\modules\scripts.py", line 347, in process
script.process(p, *script_args)
File "C:\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\additional_networks.py", line 238, in process
network, info = lora_compvis.create_network_and_apply_compvis(du_state_dict, weight, text_encoder, unet)
File "C:\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\lora_compvis.py", line 73, in create_network_and_apply_compvis
network_dim = min([s for s in size if s > 1])
ValueError: min() arg is an empty sequence
Is this using the latest version of sd-webui-additional-networks?
Great LoRA! Did you use regularisation images for this style? If so, did you generate them yourself with a prompt?
Regularisation images were not used.
@10komi One of your others has 800+ images. Was any upscaling involved? It must have taken a long time to gather; I guess you didn't do much manual captioning. Thanks for answering.
@kharminarts No upscaling was used because the size of the source images was higher than 512^2. Gathering datasets is tricky but partly automated. Deepbooru and WD auto tagger were used for larger models. For this LoRA the tags were curated by hand.
@10komi Nice, I think I have a good idea of how to go about my first LoRA now, but one more question: did you crop manually or use "bucket" mode? Looking forward to more of yours.
@kharminarts Bucket mode was used.
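For anyone wondering what "bucket mode" does: instead of cropping every image to one square size, the trainer sorts images into resolution buckets by aspect ratio, so wide and tall images keep their composition. A toy illustration of the idea (the bucket list here is made up for the example, not the trainer's actual buckets):

```python
# Minimal sketch of aspect-ratio bucketing, the idea behind the
# trainer's bucket mode. Real implementations generate buckets
# automatically from the target resolution.
def nearest_bucket(width, height, buckets):
    """Pick the bucket whose aspect ratio is closest to the image's."""
    ar = width / height
    return min(buckets, key=lambda b: abs(b[0] / b[1] - ar))

buckets = [(768, 768), (896, 640), (640, 896), (1024, 576), (576, 1024)]

print(nearest_bucket(1200, 675, buckets))  # wide image -> (1024, 576)
print(nearest_bucket(600, 800, buckets))   # tall image -> (640, 896)
```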
a blast from the past, thanks!!