CivArchive

    Please donate Buzz for FLUX LoRA training!

    FLUX v.3 (2000-2004):

    Trained on FLUX.1 [dev] with 85 photos of Natalie Portman from 2000-2004, with detailed GPT-4o captions at a square 1024px resolution. Tested on FLUX.1 dev (full), FLUX fp8 and FLUX nf4! Use at strength around 0.9-1.2, distilled CFG 3.5 and CFG 1.0 (without a negative prompt), Clip skip 1. Can be used for example as follows:

    Positive: {Artstyle, Character and scene description in usual FLUX fashion}, <lora:Natalie_Portman_Squared_FLUX_v3_merger_31_52_61_02_05_03:1.1>
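
    The `<lora:name:strength>` tag convention in the prompt above can be sketched as a tiny helper (the `lora_tag` function name and the sample artstyle text are hypothetical, for illustration only):

```python
def lora_tag(name: str, strength: float) -> str:
    """Build an A1111/Forge-style LoRA activation tag, e.g. <lora:my_lora:1.1>."""
    return f"<lora:{name}:{strength}>"

# Assemble a positive prompt in the format described above
prompt = (
    "analog photo, portrait of a woman in a garden, "
    + lora_tag("Natalie_Portman_Squared_FLUX_v3_merger_31_52_61_02_05_03", 1.1)
)
print(prompt)
```

    The tag is appended verbatim to the prompt; the WebUI parses it out and applies the named LoRA at the given strength.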

    FLUX v.2:

    Trained on FLUX.1 [dev] with 84 photos of Natalie Portman, with GPT-4 captions. Tested on FLUX.1 dev (full), FLUX fp8 and FLUX nf4! Use at strength around 0.8-1.2, distilled CFG 3.5 and CFG 1.0 (without a negative prompt), Clip skip 1. Can be used for example as follows:

    Positive: {Artstyle, Character and scene description in usual FLUX fashion}, <lora:natportman_2004_local_164_v2-000005:1.0>

    FLUX v.1 (Portman in 2004):

    Trained on FLUX.1 [dev] with 100 photos of Natalie Portman in 2004, with GPT-4 captions. Tested on FLUX.1 dev (full), FLUX fp8 and FLUX nf4! Use at strength around 0.8-1.2, distilled CFG 3.5 and CFG 1.0 (without a negative prompt), Clip skip 1. Can be used for example as follows:

    Positive: {Artstyle, Character and scene description in usual FLUX fashion}, <lora:Natalie_Portman_2004_FLUX_epoch_16:1.1>

    SDXL v6.0 2004:

    Trained on Juggernaut X with 230 photos of Natalie Portman from 2003-2004. Tested on Juggernaut X, Juggernaut v7, RealismEngine 2, RealVisXL3 and AlbedoBase 2.0! Use with the keyword "ntxprtman". Use at strength around 1.0-1.1, CFG 5.0-7.0, Clip skip 1, 10-40 steps. Can be used for example as follows:

    Positive: {Artstyle}, {Character and scene description}, elxolsn, <lora:natportman_2003_juggerX_xl_1_wocap-natxprtmn-000149:1.0>

    Negative: ugly, deformed, airbrushed, photoshop, rendered, (multiple people), child

    SDXL v5.0 Juggernaut X:

    Trained for Juggernaut X with 270 photos of Natalie Portman. Use with Juggernaut X, Juggernaut v7 or RealismEngine 2; it works best with Juggernaut X. Use with the keyword "natxportman". Can be used for example as follows:

    Positive: {Artstyle}, {Character and scene description}, natxportman, <lora:natportman_gpt4_juggernautX_2_wocap-merger_21_65_83_04_02_04-natxportman:1.0>

    Negative: ugly, deformed, airbrushed, photoshop, rendered, (multiple people), child

    SDXL v4.0 80mb:

    Trained on 137 photos of Natalie Portman with an improved LoRA setup that has a much smaller file size while maintaining likeness. Put the keyword "natxportman" at the beginning of your prompts! Use the LoRA at strength around 1.1.

    SDXL v3.1:

    Trained on a better-balanced subset of the 200-image dataset from v3.0, with improved captioning (a more refined GPT-4 Vision prompt). LoRA strength between 0.8-1.2.

    SDXL v3.0:

    Retrained on 200 images, but with an improved LoRA training method and model plus GPT-4 Vision captioning, to yield higher flexibility while preserving likeness. Keep the weight at 1.05-1.1.

    SDXL v2.0:

    LoRA trained on 270 images of Natalie Portman on the SDXL 1.0 base. The LoRA was also tested on Juggernaut XL 3.0. Most of the sample images were created with Juggernaut, apart from the "...2004..." generated ones.

    SDXL v1.0:

    LoRA trained on 45 images of Natalie Portman on the SDXL 1.0 base. The recommended strength for ComfyUI is 1.1.

    Description

    Slight likeness improvement compared to the last version.

    FAQ

    Comments (12)

    sibef50408924 · Oct 8, 2024

    What training tool do you use, and what are your settings in that tool?

    steffangund (Author) · Oct 8, 2024

    kohya. Look at the metadata of the lora.

    NeuroPixel · Oct 9, 2024

    Absolutely a master at their craft. My favorite Lora and favorite creator! Please keep at it!

    steffangund (Author) · Oct 9, 2024

    Thanks!

    5177291 · Oct 17, 2024

    Amazing!!🤩

    5203648 · Oct 25, 2024

    What learning rate did you use, to be able to train with so many images? I assume you used batch training, right?

    steffangund (Author) · Oct 25, 2024 · 1 reaction

    Basically this:

    ae = "C:/forge_diffusion_4080/webui/models/VAE/flux_vae.safetensors"
    apply_t5_attn_mask = true
    bucket_no_upscale = true
    bucket_reso_steps = 64
    cache_latents = true
    cache_latents_to_disk = true
    cache_text_encoder_outputs = true
    cache_text_encoder_outputs_to_disk = true
    caption_extension = ".txt"
    clip_l = "C:/forge_diffusion_4080/webui/models/text_encoder/clip_l.safetensors"
    clip_skip = 1
    discrete_flow_shift = 3.1582
    dynamo_backend = "no"
    enable_bucket = true
    epoch = 70
    fp8_base = true
    full_bf16 = true
    gradient_accumulation_steps = 1
    gradient_checkpointing = true
    guidance_scale = 1.0
    highvram = true
    huber_c = 0.1
    huber_schedule = "snr"
    logging_dir = "G:/Lora_resources/Lora_FLUX_training_runs/training_run_natportman_2004/log"
    loss_type = "l2"
    lr_scheduler = "constant_with_warmup"
    lr_scheduler_args = []
    lr_scheduler_num_cycles = 1
    lr_scheduler_power = 1
    max_bucket_reso = 2048
    max_data_loader_n_workers = 2
    max_timestep = 1000
    max_train_steps = 5880
    min_bucket_reso = 256
    min_snr_gamma = 7
    mixed_precision = "bf16"
    model_prediction_type = "raw"
    network_alpha = 16
    network_args = [ "train_double_block_indices=all", "train_single_block_indices=all",]
    network_dim = 4
    network_module = "networks.lora_flux"
    network_train_unet_only = true
    noise_offset = 0.05
    noise_offset_type = "Original"
    optimizer_args = [ "relative_step=False", "scale_parameter=False", "warmup_init=False",]
    optimizer_type = "Adafactor"
    output_dir = "G:/Lora_resources/Lora_FLUX_training_runs/training_run_natportman_2004/model"
    output_name = "natportman_2004_local_164_v1"
    persistent_data_loader_workers = 1
    pretrained_model_name_or_path = "C:/forge_diffusion_4080/webui/models/Stable-diffusion/FLUX/flux1-dev-fp8.safetensors"
    prior_loss_weight = 1
    resolution = "896, 896"
    sample_every_n_epochs = 1
    sample_prompts = "G:/Lora_resources/Lora_FLUX_training_runs/training_run_natportman_2004/model\\sample/prompt.txt"
    sample_sampler = "euler"
    save_every_n_epochs = 1
    save_model_as = "safetensors"
    save_precision = "bf16"
    sdpa = true
    t5xxl = "C:/forge_diffusion_4080/webui/models/text_encoder/t5xxl_fp16.safetensors"
    t5xxl_max_token_length = 512
    text_encoder_lr = []
    timestep_sampling = "sigmoid"
    train_batch_size = 1
    train_data_dir = "G:/Lora_resources/Lora_FLUX_training_runs/training_run_natportman_2004/img"
    unet_lr = 0.0004
    wandb_run_name = "natportman_2004_local_164_v1"

    Save this as a .toml file; then you can run the kohya sd-scripts trainer directly, passing it as the input config parameter.
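
    As a sanity check, the step count in this config is consistent with the dataset and epoch values stated above (84 training images for FLUX v.2, 70 epochs, batch size 1, and, per the author's reply further down, 1 repeat per image):

```python
# Values taken from the TOML config above
images = 84        # dataset size stated for FLUX v.2
repeats = 1        # dataset folder repeat count (per the author's reply)
epochs = 70        # epoch = 70
batch_size = 1     # train_batch_size = 1

steps_per_epoch = (images * repeats) // batch_size
total_steps = steps_per_epoch * epochs
print(total_steps)  # 5880, matching max_train_steps = 5880 in the config
```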

    5203648 · Oct 25, 2024

    Interesting resolution: 896×896 (896 = 64 × 14). Only 70 epochs. I'm also surprised by the LoRA rank and alpha you used: very small, but it gave you good results. How much VRAM do you have?

    steffangund (Author) · Oct 25, 2024

    @danielm007 16 gigs
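
    For context on why a rank-4 LoRA stays so small: a LoRA adapter adds only rank × (in_features + out_features) parameters per adapted weight matrix. A rough sketch, assuming an illustrative 3072-wide linear layer (the exact layer shapes in FLUX vary):

```python
def lora_param_count(in_features: int, out_features: int, rank: int) -> int:
    # LoRA factorizes the weight update as B @ A, with
    # A: rank x in_features and B: out_features x rank
    return rank * in_features + out_features * rank

base_params = 3072 * 3072                      # full weight matrix
added = lora_param_count(3072, 3072, rank=4)   # LoRA update at rank 4
print(added, base_params)  # 24576 vs. 9437184, under 0.3% of the base layer
```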

    AIENGI · Nov 5, 2024

    @steffangund how many steps are you doing with this script? Or rather, how many repeats are set up in your dataset folder structure?

    steffangund (Author) · Nov 5, 2024 · 1 reaction

    @AIENGI 1 repeat

    AIENGI · Nov 6, 2024

    @steffangund thank you. Trying to recreate your quality, as from what I've seen it's currently unmatched when it comes to likeness. Truly great work!