A modified NoobAI-XL intended for LoRA training.
LoRAs trained on it show fewer unintended style changes.
It is not suitable for image generation.
Recommended training parameters for character/concept LoRAs (sd-scripts); an example command follows the notice below.
Batch size: 2
Resolution: 1024
Enable aspect ratio bucketing: Yes
Min/Max bucket reso: 512/2048
Dim (rank): 4-16
Alpha: dim * 0.25
Optimizer: RAdamScheduleFree (LR = 4e-4) or CAME (LR = 8e-5)
Steps: 1000-1400
Train Conv2d: No
FP8 base: Yes (for <12 GB VRAM)
DO NOT USE:
--noise_offset, --zero_terminal_snr
REQUIRED (v-prediction only):
--v_parameterization
NOTICE
Civitai's on-site trainer does not support v-prediction.
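For reference, a command along these lines would apply the settings above in sd-scripts. This is only a sketch: the model, dataset, and output names are placeholders, the image folder is assumed to use sd-scripts' usual "<repeats>_<name>" subfolder layout, and RAdamScheduleFree requires a recent sd-scripts with the schedulefree package installed. Drop --fp8_base if you have more than 12 GB of VRAM, and add --v_parameterization only when training on the v-prediction version (never --noise_offset or --zero_terminal_snr).

accelerate launch --num_cpu_threads_per_process 1 sdxl_train_network.py ^
 --pretrained_model_name_or_path "AnyNoob-epsilon.safetensors" ^
 --train_data_dir "path\to\dataset" --caption_extension ".txt" ^
 --resolution "1024,1024" --enable_bucket --min_bucket_reso 512 --max_bucket_reso 2048 ^
 --train_batch_size 2 --max_train_steps 1200 ^
 --network_module "networks.lora" --network_dim 8 --network_alpha 2 ^
 --optimizer_type "RAdamScheduleFree" --learning_rate 0.0004 ^
 --mixed_precision "bf16" --xformers --cache_latents --gradient_checkpointing ^
 --fp8_base ^
 --output_dir "output" --output_name "my_character" --save_model_as "safetensors"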
This is NoobAI-XL modified for LoRA training.
It reduces unintended style changes and improves accuracy.
When training a LoRA with this model, do not use --noise_offset or --zero_terminal_snr.
When training a LoRA with the v-prediction version, set --v_parameterization.
Also, because the effect of noise offset has been canceled out, using this model for merging is not recommended.
Note that a LoRA's effect is weaker on merged models that contain little NoobAI-XL.
If you plan to use it with checkpoints that contain little NoobAI-XL, such as copycat or paruparu, training on AnyIllustrious is probably the better choice.
The underlying mechanism is unclear, but it appears that pulling the art style toward the average and increasing diversity lets the model absorb the style of the training dataset. As for accuracy, I have confirmed that it improves when Zero Terminal SNR is disabled.
Training information:
Fine-tuned from NoobAI-XL by repeating DoRA training and merging twice with sd-scripts.
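For illustration, one round of that loop might look like the following. This is a sketch under assumptions, since the exact settings are not published: the dora_wd=True network argument is assumed to enable DoRA-style weight decomposition in sd-scripts' lora module (available in recent versions), networks\sdxl_merge_lora.py folds the trained weights back into the checkpoint, and all file names are placeholders. The merged output then becomes the base model for the next round.

accelerate launch --num_cpu_threads_per_process 1 sdxl_train_network.py ^
 --pretrained_model_name_or_path "noobai-xl-v1.1.safetensors" ^
 --dataset_config "dataset.toml" ^
 --network_module "networks.lora" --network_args "dora_wd=True" ^
 --output_dir "output" --output_name "dora_round1" --save_model_as "safetensors"

python networks\sdxl_merge_lora.py ^
 --sd_model "noobai-xl-v1.1.safetensors" ^
 --models "output\dora_round1.safetensors" --ratios 1.0 ^
 --save_to "merged_round1.safetensors" --save_precision "fp16"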
Dataset information:
Dataset size: 5,120 images, collected from Gelbooru in newest-first order on September 4, 2024.
Images with any of the following tags, or meeting any of the following conditions, were excluded:
filetype:gif, score:<0, mpixels:<1048576, tagcount:<16, *_artifacts, adversarial_noise, greyscale, monochrome, digimon, photoshop_(medium), ai-generated, duplicate, bad_*, off-topic, cropped, resized, reversed, rotated, third-party_edit, screenshot, tagme, real_life, watermark, 3d, koikatsu_(medium), mikumikudance, twitter_username
Training script (source code)
Notice:
This model is licensed under Fair AI Public License 1.0-SD( https://freedevproject.org/faipl-1.0-sd/ ).
If you modify this model, you must share both your changes and the original license.
Base model: NoobAI-XL-v1.1 (Epsilon)
Comments (6)
Thank you for your work! ❤️ Maybe some guide on how to train a LoRA? I tried it like this: I started from 12 repeats per image and increased it to 256, but the keyword does not work: https://civitai.com/models/833294?modelVersionId=1190596
I have 20 images in the dataset.
It would be cool if you could share your params. Have a nice day.
accelerate launch --num_cpu_threads_per_process 1 sdxl_train_network.py ^
 --pretrained_model_name_or_path "g:\stable-diffusion-webui\models\Stable-diffusion\01 NoobAI\noobaiXLNAIXL_vPred10Version.safetensors" ^
 --dataset_config "g:\datasets\naya\dataset.toml" ^
 --output_dir "g:\datasets\output\LoRA\naya" ^
 --output_name "naya_lora" ^
 --save_model_as "safetensors" ^
 --network_dim 16 ^
 --network_alpha 4 ^
 --optimizer_type "AdamW" ^
 --network_module "networks.lora" ^
 --xformers ^
 --mixed_precision "bf16" ^
 --cache_latents ^
 --gradient_checkpointing ^
 --v_parameterization ^
 --save_every_n_epochs 1 ^
 --save_state
dataset.toml
[general]
enable_bucket = true                    # Enables aspect ratio bucketing (recommended)

[[datasets]]
resolution = 1024                       # SDXL default resolution
batch_size = 1                          # Lower batch size to fit 12GB VRAM
min_bucket_reso = 512                   # Reduce bucket resolution to save VRAM
max_bucket_reso = 1024                  # Keep max at 1024 for SDXL

[[datasets.subsets]]
image_dir = "g:/datasets/naya/images"   # Path to dataset images
caption_extension = ".txt"              # Ensure captions are in .txt format
num_repeats = 256                       # Increase repeats for better character learning
shuffle_caption = true                  # Helps with generalization
keep_tokens = 1                         # Keeps the first token unchanged (e.g., the character name)
flip_aug = true                         # Enable horizontal flipping for diversity
caption_dropout_rate = 0.1              # 10% dropout to avoid overfitting on captions
I'm confused: Civitai lists NoobAI-XL as an SDXL base model while your model is listed as Illustrious. I tried to train with NoobAI-XL on Civitai, and the LoRA I got did not work at all. Does that mean the NoobAI-XL on site was not designed for training?
Was it vPred or epsilonPred? I don't think the on-site training is compatible with the noise settings you need to use for training NoobAI-vPred (whether you use AnyNoob or not).
@PhoenixMercurous Could you tell me where I can learn the proper noise settings / other training settings for NoobAI? Many thanks!
@huggy This model's description gives the essentials, I think. I'm no expert though.
I'm quite puzzled about why a fine-tuned model based on NoobAI could be more suitable for LoRA fine-tuning than NoobAI itself. Is there any theoretical basis for doing so?

