Trigger word for prompt: detached foreskin
Detaches the foreskin and "peels" (剝) it back to about the middle of the penis
Check my images for prompts / LoRA combinations
LoRA strength best between 0.8 and 1.5 (see the "About this version" information)
Some sample images are upscaled with 4x-AnimeSharp and Ultimate SD upscale.
Generated using ComfyUI
Please share your creations so I can see how well it's working with other styles! :)
General Training Settings
LoRA_Easy_Training_Scripts to train the model
DatasetProcessorDesktop to crop and manage images
BooruDatasetTagManager to manage tags
See "About this version" for individual training details (right side)
v4.6 Training Details
LoRA_Easy_Training_Scripts to train the model (.toml available under version info tab)
DatasetProcessorDesktop to crop and sort images
BooruDatasetTagManager to manage tags
Trained on different artists, grouped into subsets
Only 700 Steps (got lucky?)
Size of penis may be difficult to control at times
v3.2 and v3.3 Training Details
Set of 222 images, Repeats 2, Batch Size 4, Epochs: 40 total, though training only produced 38 (not sure why)
Best results around 14 epochs (-0014), roughly 6214 training steps
It didn't seem too overtrained even after 40 epochs
LoRA Type: LyCORIS/LoCon for smaller file size
Lower Network Rank = Lower File Size
Prodigy Optimizer automatically adjusts learning rate during training
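As a rough sanity check on the numbers above, the step count can be estimated from the dataset settings. Note there are two common conventions (image passes vs. optimizer steps, which divide by batch size); the quoted ~6214 appears to match image passes. A minimal sketch, assuming those conventions:

```python
# Step estimate for the v3.2/v3.3 run (222 images, 2 repeats, batch 4).
images = 222
repeats = 2
epochs_best = 14
batch_size = 4

# Convention 1: one "step" per image pass (matches the ~6214 quoted above).
image_passes = images * repeats * epochs_best
print(image_passes)  # 6216

# Convention 2: optimizer steps, as kohya-style trainers count them.
optimizer_steps = image_passes // batch_size
print(optimizer_steps)  # 1554
```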
v2.0 LoRA Training Details
Set of 240 images, Repeats 5, Epochs 6
Instance Prompt: detached foreskin, Class Prompt: penis
LoRA Type: standard
Max Train Steps: 7200 (= 240 images × 5 repeats × 6 epochs)
bf16 precision
SDXL enabled, resolution of 1024x1024
Buckets enabled ("don't upscale resolution" checked). Bucket resolution min: 256, max: 4096
Optimizer: Adafactor, args: scale_parameter=False relative_step=False warmup_init=False
LR Scheduler: constant
Learning Rate: 0.000025, Text Encoder Learning Rate: 0.0001, UNet Learning Rate: 0.0001
Network Rank: 64, Alpha: 1 (worth experimenting with later; these settings seem important)
Saved every 1 epoch, best results around 3 epochs (-0003) = ~3600 steps of training
Doesn't affect style too heavily; the effect appears at low weight (0.5) and holds up at very high weights (2.0) without completely destroying the image
(other settings didn't seem too important yet?)
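Why Rank and Alpha matter: under the common LoRA scaling convention (used by the original LoRA paper and kohya-style trainers, which I'm assuming applies here), the learned update is multiplied by alpha/rank before being added to the base weights. A minimal sketch of the arithmetic:

```python
# LoRA scaling sketch: W' = W + (alpha / rank) * (B @ A)
# With the v2.0 settings (Rank 64, Alpha 1) the learned update is
# applied at 1/64 strength, which is why these two values interact
# so strongly with the learning rate.
rank, alpha = 64, 1.0
scale = alpha / rank
print(scale)  # 0.015625

# For comparison, the v4.6 config below (network_dim 32, network_alpha 32.0)
# applies the update at full strength:
print(32.0 / 32)  # 1.0
```

This is also why changing Rank or Alpha usually requires retuning the learning rate rather than comparing runs directly.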
P.S. This is my first concept (I don't really know what I'm doing most of the time). If you have any tips or advice, please comment on this model or DM me; it will improve this model and future ones that I make.
Thank you!
Description
Recommended strength for this version: 1.0 ~ 1.2
Only took 700 steps this time; I got lucky, I guess. I think you can specify artist names and it will cater toward their style more, though I haven't tried it. The penis size is somewhat erratic in this version: it is very sensitive and comes out either way too large or too small. This might be due to the cropped images in the training data.
I had good results around 1900~2000 steps, but it impacted the overall image style quite a bit. I updated the training images and tags repeatedly and finally got something I liked.
This time around, I used LoRA_Easy_Training_Scripts, which is far easier than using KohyaSS directly, DatasetProcessorDesktop to crop and sort images, and BooruDatasetTagManager to manage tags.
Here is the training TOML:
[[subsets]]
caption_extension = ".txt"
image_dir = "redacted"
name = "aya_shobon"
num_repeats = 1
[[subsets]]
caption_extension = ".txt"
image_dir = "redacted"
name = "canvassolaris"
num_repeats = 2
[[subsets]]
caption_extension = ".txt"
image_dir = "redacted"
name = "elfk"
num_repeats = 1
[[subsets]]
caption_extension = ".txt"
image_dir = "redacted"
name = "manzai_sugar"
num_repeats = 2
[[subsets]]
caption_extension = ".txt"
image_dir = "redacted"
name = "nonorigo"
num_repeats = 1
[[subsets]]
caption_extension = ".txt"
image_dir = "redacted"
name = "stickyspoodge"
num_repeats = 1
[train_mode]
train_mode = "lora"
[general_args.args]
max_data_loader_n_workers = 1
persistent_data_loader_workers = true
pretrained_model_name_or_path = "illustriousXL_v01.safetensors"
vae = "sdxlVAE_sdxlVAE.safetensors"
mixed_precision = "bf16"
gradient_checkpointing = true
gradient_accumulation_steps = 1
seed = 1337
max_token_length = 225
prior_loss_weight = 1.0
xformers = true
cache_latents = true
cache_latents_to_disk = true
sdxl = true
max_train_steps = 2500
[general_args.dataset_args]
resolution = 1024
batch_size = 2
[network_args.args]
network_dim = 32
network_alpha = 32.0
min_timestep = 0
max_timestep = 1000
[optimizer_args.args]
optimizer_type = "Prodigy"
lr_scheduler = "cosine"
loss_type = "l2"
learning_rate = 1.0
max_grad_norm = 1.0
min_snr_gamma = 5
[saving_args.args]
output_dir = "redacted"
save_precision = "fp16"
save_model_as = "safetensors"
save_last_n_epochs = 1
save_every_n_steps = 100
output_name = "detached_foreskin-v4.6"
[noise_args.args]
multires_noise_iterations = 8
multires_noise_discount = 0.4
[bucket_args.dataset_args]
enable_bucket = true
min_bucket_reso = 512
max_bucket_reso = 2048
bucket_reso_steps = 64
[network_args.args.network_args]
conv_dim = 32
conv_alpha = 32.0
algo = "locon"
[optimizer_args.args.optimizer_args]
weight_decay = "0.1"
betas = "0.9, 0.99"
decouple = "True"
d_coef = "1.0"
