Gal Gadot-Varsano (born 30 April 1985) is an Israeli actress and model. She was crowned Miss Israel 2004 and represented her country at the Miss Universe 2004 pageant. She then served in the Israel Defense Forces for two years as a combat fitness instructor, whereafter she began studying at IDC Herzliya while building her modeling and acting careers.
Comments (13)
I don't think a single one of yours has worked for me. Am I doing something wrong?
What base model are you using?
@parar20 Various. Usually the new JuggernautXL though. Rarely base SDXL.
@TheManTheMyth92 I train all SDXL models on sd_xl_base_1.0.safetensors and generate the preview images on that same model with the sdxl_vae.safetensors VAE. Check your settings to see which VAE you are using.
@parar20 I've got that VAE in the folder and I think the settings are right? I switched it from Automatic to sdxl_vae.safetensors. Like, it's recognizably Gal Gadot, but the results are still coming out crunchy and very fragmented. Base is better but not perfect. Nothing like the results you're getting.
I am using it with other loras.
@TheManTheMyth92 Try with just this prompt until you get good results: GalGadot, <lora:GalGadotSDXL:1>. Do you have Restore Faces turned on? I think it's some setting that's not right. Also, are you using automatic1111 or some other tool? For me, I get good results 90%-95% of the time.
@parar20 A1111, and I don't believe I do... I don't even know how to turn it on in the new A1111 version. I'll keep fucking around with it. Yours are the only ones I'm having problems with, so I assume something is messed up elsewhere. It works fine with style or pose loras, just not with character loras.
Thanks for your help though!
@parar20 Oh! It's working waaaaaaaaaaaaaaay better now that I turned it on.
@TheManTheMyth92 Glad you made progress.
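As a side note on the prompt syntax used in the thread above, here is a minimal sketch of how the A1111-style `<lora:name:weight>` tag can be generated at several strengths, which helps isolate whether the LoRA itself or some other setting is the problem. `lora_prompt` is a hypothetical helper, not part of any tool:

```python
def lora_prompt(base: str, lora_name: str, weight: float) -> str:
    """Append an A1111-style LoRA tag to a base prompt.

    A1111 parses <lora:NAME:WEIGHT> out of the prompt and applies the
    LoRA at the given strength (1.0 = full strength).
    """
    return f"{base}, <lora:{lora_name}:{weight}>"

# Sweep the weight down from 1.0 to see where artifacts start appearing.
for w in (0.6, 0.8, 1.0):
    print(lora_prompt("GalGadot", "GalGadotSDXL", w))
```

Dropping the weight below 1.0 trades likeness for flexibility, which is often the quickest fix when a character LoRA "crunches" the output.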
Hey, can you share your training config?
Here is the JSON config:
{
"LoRA_type": "Standard",
"adaptive_noise_scale": 0,
"additional_parameters": "",
"block_alphas": "",
"block_dims": "",
"block_lr_zero_threshold": "",
"bucket_no_upscale": true,
"bucket_reso_steps": 64,
"cache_latents": true,
"cache_latents_to_disk": true,
"caption_dropout_every_n_epochs": 0.0,
"caption_dropout_rate": 0,
"caption_extension": ".txt",
"clip_skip": "1",
"color_aug": false,
"conv_alpha": 1,
"conv_block_alphas": "",
"conv_block_dims": "",
"conv_dim": 1,
"decompose_both": false,
"dim_from_weights": false,
"down_lr_weight": "",
"enable_bucket": true,
"epoch": 1,
"factor": -1,
"flip_aug": false,
"full_bf16": false,
"full_fp16": false,
"gradient_accumulation_steps": 1,
"gradient_checkpointing": false,
"keep_tokens": "0",
"learning_rate": 0.0004,
"logging_dir": "F:\\Projects\\Kohya\\dataset\\GalGadotLora\\log",
"lora_network_weights": "",
"lr_scheduler": "constant",
"lr_scheduler_args": "",
"lr_scheduler_num_cycles": "",
"lr_scheduler_power": "",
"lr_warmup": 0,
"max_bucket_reso": 2048,
"max_data_loader_n_workers": "0",
"max_resolution": "1024,1024",
"max_timestep": 1000,
"max_token_length": "75",
"max_train_epochs": "",
"max_train_steps": "",
"mem_eff_attn": false,
"mid_lr_weight": "",
"min_bucket_reso": 256,
"min_snr_gamma": 0,
"min_timestep": 0,
"mixed_precision": "fp16",
"model_list": "custom",
"module_dropout": 0,
"multires_noise_discount": 0,
"multires_noise_iterations": 0,
"network_alpha": 1,
"network_dim": 1,
"network_dropout": 0,
"no_token_padding": false,
"noise_offset": 0,
"noise_offset_type": "Original",
"num_cpu_threads_per_process": 16,
"optimizer": "Adafactor",
"optimizer_args": "scale_parameter=False relative_step=False warmup_init=False",
"output_dir": "F:\\Projects\\Kohya\\dataset\\GalGadotLora\\model",
"output_name": "GalGadotSDXL",
"persistent_data_loader_workers": false,
"pretrained_model_name_or_path": "F:/Projects/StableDiffusion/stable-diffusion-webui/models/Stable-diffusion/sd_xl_base_1.0.safetensors",
"prior_loss_weight": 1.0,
"random_crop": false,
"rank_dropout": 0,
"reg_data_dir": "",
"resume": "",
"sample_every_n_epochs": 0,
"sample_every_n_steps": 100,
"sample_prompts": "GalGadot, portrait photo of a woman --w 1024 --h 1024 --l 7 --s 50 ",
"sample_sampler": "euler_a",
"save_every_n_epochs": 1,
"save_every_n_steps": 0,
"save_last_n_steps": 0,
"save_last_n_steps_state": 0,
"save_model_as": "safetensors",
"save_precision": "fp16",
"save_state": false,
"scale_v_pred_loss_like_noise_pred": false,
"scale_weight_norms": 0,
"sdxl": true,
"sdxl_cache_text_encoder_outputs": false,
"sdxl_no_half_vae": true,
"seed": "",
"shuffle_caption": false,
"stop_text_encoder_training_pct": 0,
"text_encoder_lr": 4e-05,
"train_batch_size": 1,
"train_data_dir": "F:\\Projects\\Kohya\\dataset\\GalGadotLora\\image",
"train_on_input": true,
"training_comment": "",
"unet_lr": 0.0004,
"unit": 1,
"up_lr_weight": "",
"use_cp": false,
"use_wandb": false,
"v2": false,
"v_parameterization": false,
"v_pred_like_loss": 0,
"vae_batch_size": 0,
"wandb_api_key": "",
"weighted_captions": false,
"xformers": "xformers"
}
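To read the config above at a glance, here is a minimal sketch (using only Python's standard `json` module) that pulls out the handful of hyperparameters that matter most when reproducing this LoRA. The subset embedded below is copied from the full config; nothing else is assumed:

```python
import json

# Subset of the Kohya training config posted above, embedded inline so
# the snippet is self-contained.
config_text = """
{
  "network_dim": 1,
  "network_alpha": 1,
  "learning_rate": 0.0004,
  "unet_lr": 0.0004,
  "text_encoder_lr": 4e-05,
  "optimizer": "Adafactor",
  "lr_scheduler": "constant",
  "train_batch_size": 1,
  "max_resolution": "1024,1024",
  "mixed_precision": "fp16"
}
"""

config = json.loads(config_text)

# network_dim is the LoRA rank; network_alpha / network_dim is the
# effective scaling applied to the LoRA weights during training.
scale = config["network_alpha"] / config["network_dim"]

print(f"rank={config['network_dim']} scale={scale}")
print(f"unet_lr={config['unet_lr']} te_lr={config['text_encoder_lr']}")
print(f"optimizer={config['optimizer']} batch={config['train_batch_size']}")
```

Note the unusually low rank (network_dim 1 with alpha 1): this keeps the resulting file tiny, and the alpha/dim ratio of 1.0 means the learning rates above apply without extra downscaling.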
@parar20 thanks!