This is an experimental collection of LoRAs, each trained on a single image. They are obviously overfitted, but that is the intended result.
Be aware that results are frequently unstable, so use (or skip) the trigger words accordingly. Don't hesitate to change the LoRA weight or the trigger word if you are using one.
The base model is Suzumehachi.
UPDATE: The workflow.
The way I create these is as follows. I find an interesting picture and either make it square or manually crop it into one or two interesting pieces. Then I use BLIP (sometimes together with DeepDanbooru) to caption them and check that the captions look right. After that, I usually add a special trigger word just in case, to give the LoRA some punch if it struggles in an unusual prompt environment.
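The prep step above can be sketched in a few lines. This is a minimal sketch, assuming a PIL-based center crop and kohya-ss's convention of a `.txt` caption sidecar next to each image; the trigger word `sks_style` is a made-up placeholder, not one the author uses.

```python
# Sketch of the dataset-prep step: square-crop the image, then write a
# caption file with a trigger word prepended. "sks_style" is hypothetical.
from pathlib import Path
from PIL import Image


def center_crop_square(img: Image.Image) -> Image.Image:
    """Crop the largest centered square from an image."""
    w, h = img.size
    side = min(w, h)
    left = (w - side) // 2
    top = (h - side) // 2
    return img.crop((left, top, left + side, top + side))


def prepare(image_path: str, caption: str, trigger: str = "sks_style") -> Path:
    """Square-crop the image in place and write the caption sidecar."""
    p = Path(image_path)
    center_crop_square(Image.open(p)).save(p)
    txt = p.with_suffix(".txt")
    txt.write_text(f"{trigger}, {caption}")
    return txt
```

In practice the caption itself would come from BLIP or DeepDanbooru; here it is just passed in as a string.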
I use kohya with the following parameters (only the most important ones are shown):
repeats per dataset image: 100-200
--network_alpha="128"
--text_encoder_lr=5e-5
--unet_lr=0.0001
--network_dim="128"
--lr_scheduler_num_cycles="1"
--learning_rate="0.0001"
--lr_scheduler="constant"
--train_batch_size="2"
--mixed_precision="bf16"
--clip_skip=2
--noise_offset=0.1
--min_snr_gamma=5
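Assembled into a single kohya-ss sd-scripts invocation, the flags above might look roughly like this. This is a sketch, not the author's exact command: the paths, output name, and the repeat count encoded in the dataset folder name are placeholders.

```shell
# Hypothetical train_network.py call combining the flags listed above.
# The "150_mystyle" folder name encodes ~150 repeats per image (kohya convention).
accelerate launch train_network.py \
  --pretrained_model_name_or_path="suzumehachi.safetensors" \
  --train_data_dir="dataset" \
  --output_name="mystyle_lora" \
  --network_module="networks.lora" \
  --network_dim="128" --network_alpha="128" \
  --learning_rate="0.0001" --unet_lr=0.0001 --text_encoder_lr=5e-5 \
  --lr_scheduler="constant" --lr_scheduler_num_cycles="1" \
  --train_batch_size="2" --mixed_precision="bf16" \
  --clip_skip=2 --noise_offset=0.1 --min_snr_gamma=5
```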
Because of my other workflow, I make 4 variants of each LoRA with different seeds and combine them pair by pair with a multiplication factor of around 0.7:
python.exe "networks\merge_lora.py" --save_precision fp16 --precision fp16 --save_to combined_lora.safetensors --models lora_1.safetensors lora_2.safetensors --ratios 0.7 0.7
If something goes wrong, I recombine them with a different multiplication factor or exclude an outlier LoRA.
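Under the hood, a ratio-weighted merge like the one above amounts to a weighted sum of the corresponding LoRA tensors. A toy numpy illustration of that arithmetic (this is NOT kohya's `merge_lora.py`, just the idea behind the `--ratios` flag):

```python
# Toy illustration: merging two LoRAs with ratios r1, r2 sums each pair of
# corresponding tensors as r1*W1 + r2*W2.
import numpy as np


def merge_tensors(tensors_a: dict, tensors_b: dict,
                  ratio_a: float, ratio_b: float) -> dict:
    """Weighted sum of two state dicts with identical keys."""
    return {k: ratio_a * tensors_a[k] + ratio_b * tensors_b[k] for k in tensors_a}


lora_1 = {"lora_up": np.ones((2, 2))}
lora_2 = {"lora_up": np.full((2, 2), 3.0)}
combined = merge_tensors(lora_1, lora_2, 0.7, 0.7)  # 0.7*1 + 0.7*3 = 2.8 everywhere
```

With 0.7 + 0.7 the combined magnitudes land near the originals' scale rather than doubling, which is presumably why a factor around 0.7 (close to 1/sqrt(2)) works well for pairs.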
Comments (8)
Can you make them smaller? Compressing 1 image into 144 MB is not very efficient 😎
As I said, it's more of an experimental collection, so space efficiency is not really my concern at this point. Also, I like my LoRAs to share the same dimension for easy mixing (I know you can resize them, but that's another unnecessary step for me).
What I'm trying to say is that this is more about showing people that you can indeed make decent style LoRAs with only one image in your dataset.
@dobrosketchkun you know you can transfer the style of a single image with ControlNet now?
@_1_ I've heard the results are subpar, but as far as I'm concerned, you do you; I'm here for the fun of experimentation and broadening the field, so to speak.
this is awesome. would love to know the training parameters so i could try it myself (repeats, epochs, scheduler, lr, etc.). thank you!
I've added them to the description.
do you have any example json file ?
At this point, I don't think so. Just use the parameters I mentioned in the description.










