Shimabara Yuuhi (Neo Ranga)
Options
<lyco:shimabara_yuuhi_lora-weights:0.25:1.5>
(<lyco:shimabara_yuuhi_spmrgr01_lora-weights:0.8>)
OR
Separate UNet/Text Encoder weights (sd-webui-additional-networks)
UNet Weight 1.5
TEnc Weight 0.25-1.5
OR
<lora:shimabara_yuuhi_lora-weights:1.5>
(<lora:shimabara_yuuhi_spmrgr01_lora-weights:0.8>)
sd-webui-additional-networks
https://github.com/kohya-ss/sd-webui-additional-networks.git
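The two numbers in the `<lyco:...:0.25:1.5>` syntax are separate strength multipliers for the Text Encoder and UNet parts of the LoRA, matching the "UNet Weight 1.5 / TEnc Weight 0.25" settings above. Conceptually, each affected weight is patched as base + multiplier × (up @ down). A minimal NumPy sketch of that idea (the names and shapes here are illustrative, not the extension's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes for one projection layer.
base_weight = rng.normal(size=(320, 320))          # frozen base-model weight
lora_down   = rng.normal(size=(16, 320)) * 0.01    # rank-16 "down" matrix
lora_up     = rng.normal(size=(320, 16)) * 0.01    # rank-16 "up" matrix

def apply_lora(w, up, down, multiplier):
    """Patch a base weight with a scaled low-rank LoRA delta."""
    return w + multiplier * (up @ down)

# Separate strengths, as in the UNet 1.5 / TEnc 0.25 settings above.
unet_patched = apply_lora(base_weight, lora_up, lora_down, 1.5)
te_patched   = apply_lora(base_weight, lora_up, lora_down, 0.25)

# Multiplier 0 leaves the base model unchanged.
assert np.allclose(apply_lora(base_weight, lora_up, lora_down, 0.0), base_weight)
```

Because the delta is applied linearly, lowering the Text Encoder multiplier (toward 0.25) keeps the base model's prompt understanding while the UNet multiplier controls how strongly the character's look is imposed.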
Used embeddings: NG_DeepNegative_V1_75T [1a3e]
https://civarchive.com/models/4629/deep-negative-v1x
extract_lora_from_models.py
https://github.com/kohya-ss/sd-scripts/blob/main/train_network_README-ja.md
This LoRA saves the difference between the base model and a model trained with DreamBooth, extracted using extract_lora_from_models.py.
sample prompts
<lyco:shimabara_yuuhi_lora-weights:0.25:1.5>
shimabara yuuhi,
blue black hair,short disheveled bangs,jagged cowlick,long locks,medium hair,thick eyebrows,raised eyebrow,squinting,half opened one eye,sharp yellow eyes,small breasts,parted lips,
wet clothes white loincloth,white sarong,between legs white apron,white ribbon forehead hachimaki,wet clothes white criss-cross halter neck,
girl tribal red tattoo swirls spin pattern all over body,red body markings,bodypaint,tattoo,full-body tattoo,body writing,chest tattoo,red facepaint,red facial tattoo,arm tattoo,leg tattoo,shoulder tattoo,tribal,
golden anklet,golden armlet,golden bangle,golden bracelet,golden choker,
sample Negative prompts
(worst quality:1.4),(low quality:1.4),(bad_prompt_version2:0.8),
short legs,short thighs,parted bangs,blunt bangs,bobbed bangs,extra eyebrows,pointy ears,
Description
First release; test version.
FAQ
Comments (3)
Hello, very good LoRA! I wanted to know how you make them; do you have a link to where you learned? Thank you very much.
Hello Mudo
This model is the difference between the original model and one trained with DreamBooth, converted to a LoRA.
I learned about DreamBooth training from the following sites:
https://github.com/d8ahazard/sd_dreambooth_extension
http://dskjal.com/deeplearning/dreambooth.html
https://github.com/kohya-ss/sd-scripts/
We trained for 200,000 steps against AnythingV3.
About 750 animation cels that I bought and collected in Nakano and Akihabara, Japan over the past 20 years were used as training material.
It was trained with noflip and bf16.
For the classifiers_concept images, I tried several times, but it did not work.
We are using the 55,000-step checkpoint, which looked best among 35,000, 55,000, 85,000, and 100,000 steps, after comparing snapshots taken every 5,000 steps.
learningRate: 1e-5
steps: 200000
mixed_precision: bf16
Instance Token: shimabara yuuhi,[filewords]
Class Token: [filewords]
Then, using extract_lora_from_models.py, we extracted the difference
(AnythingV3baseVAEshimabarayuuhi200000noflipbf16_55000.ckpt) - (Anything-V3.0.ckpt)
as a LoRA.
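Conceptually, that extraction step takes the per-layer weight difference between the tuned and original checkpoints and compresses it into a low-rank LoRA via truncated SVD. A toy NumPy illustration of the idea (stand-in shapes and rank; this is not the script's actual code):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for one layer of the two checkpoints.
w_original = rng.normal(size=(64, 64))                         # base model
low_rank_update = rng.normal(size=(64, 4)) @ rng.normal(size=(4, 64))
w_tuned = w_original + low_rank_update                         # DreamBooth result

# Difference that the LoRA should reproduce.
delta = w_tuned - w_original

# Truncated SVD keeps only the top `dim` singular directions.
dim = 4
u, s, vh = np.linalg.svd(delta, full_matrices=False)
lora_up   = u[:, :dim] * s[:dim]   # shape (64, dim)
lora_down = vh[:dim, :]            # shape (dim, 64)

# Since this delta is genuinely rank 4, the rank-4 LoRA reproduces it almost exactly.
assert np.allclose(lora_up @ lora_down, delta, atol=1e-8)
```

In practice the real weight difference is not exactly low-rank, so the chosen rank (the script's `--dim`-style setting) trades file size against how faithfully the LoRA captures the DreamBooth changes.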
@neoranga interesting, thank you idol

