CivArchive
    Kohaku-XL Delta - rev1
    NSFW

    Kohaku XL Δelta

    One of the best SDXL anime base models trained on consumer-level hardware.

    join us: https://discord.gg/tPBsKDyRR5

    Introduction

    Kohaku XL Delta, the fourth major iteration in the Kohaku XL series, features a 3.6-million-image dataset and LyCORIS fine-tuning[1], was trained on consumer-level hardware, and is fully open-sourced.

    Usage

    Here's a simple format to make using this model a breeze:

    <1girl/1boy/1other/...>, <character>, <series>, <artists>, <special tags>, <general tags>

    Special tags (quality, rating, and date) actually fall under general tags, but it is a good idea to group them before the general tags.

    While Kohaku XL Delta has mastered a few artists' styles with high fidelity, users are strongly encouraged to blend multiple artist tags to explore new styles rather than attempting to replicate the style of any specific artist.
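    As a sketch, the recommended tag ordering can be assembled programmatically; the helper and all sample tag values below are illustrative, not part of the model card:

    ```python
    # Hypothetical helper assembling a prompt in the recommended order:
    # count, character, series, artists, special tags, general tags.
    def build_prompt(count, characters, series, artists, special, general):
        parts = [count, *characters, *series, *artists, *special, *general]
        return ", ".join(p for p in parts if p)

    prompt = build_prompt(
        "1girl", ["hakurei reimu"], ["touhou"], ["artist a", "artist b"],
        ["masterpiece", "newest", "safe"], ["solo", "smile", "looking at viewer"],
    )
    # -> "1girl, hakurei reimu, touhou, artist a, artist b, masterpiece, newest, safe, solo, smile, looking at viewer"
    ```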

    Tags

    All danbooru tags with a post count of at least 1000 should work.

    All danbooru tags with a post count of at least 100 can possibly work with high emphasis.

    Remember to remove all underscores in tags. (Underscores in short tags should not be removed, as these are very likely emoji tags such as >_<.)
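    A minimal sketch of that normalization, assuming a length threshold (here 3 characters, my own choice) to spare likely emoji tags:

    ```python
    # Replace underscores with spaces, except in short tags that are
    # likely emoji (e.g. ">_<"); the length cutoff is an assumption.
    def normalize_tag(tag, min_len=3):
        if len(tag) <= min_len:
            return tag
        return tag.replace("_", " ")

    normalize_tag("long_hair")  # -> "long hair"
    normalize_tag(">_<")        # -> ">_<" (kept as-is)
    ```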

    Special Tags

    - Quality tags: masterpiece, best quality, great quality, good quality, normal quality, low quality, worst quality

    - Rating tags: safe, sensitive, nsfw, explicit

    - Date tags: newest, recent, mid, early, old

    Quality Tags

    Quality tags are assigned based on percentile rankings of the favorite count (fav_count) within each rating category, to avoid a bias toward NSFW content (a problem Animagine XL v3 encountered). The thresholds, from high to low, are the 95th, 85th, 75th, 50th, 25th, and 10th percentiles, creating seven distinct quality levels separated by six thresholds.
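    The percentile scheme can be sketched as follows; the function names and interpolation details are my own, not taken from the actual training code:

    ```python
    QUALITY_TAGS = ["masterpiece", "best quality", "great quality", "good quality",
                    "normal quality", "low quality", "worst quality"]
    PERCENTILES = [95, 85, 75, 50, 25, 10]  # six thresholds -> seven levels

    def percentile(sorted_vals, p):
        # Linear-interpolation percentile (stand-in for np.percentile).
        k = (len(sorted_vals) - 1) * p / 100
        f = int(k)
        c = min(f + 1, len(sorted_vals) - 1)
        return sorted_vals[f] + (sorted_vals[c] - sorted_vals[f]) * (k - f)

    def quality_tag(fav_count, fav_counts_in_same_rating):
        # Thresholds are computed within one rating category to avoid NSFW bias.
        vals = sorted(fav_counts_in_same_rating)
        for tag, p in zip(QUALITY_TAGS, PERCENTILES):
            if fav_count >= percentile(vals, p):
                return tag
        return QUALITY_TAGS[-1]
    ```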

    Rating tags

    General: safe

    Sensitive: sensitive

    Questionable: nsfw

    Explicit: nsfw, explicit

    Note: During training, content tagged as "explicit" is also considered under "nsfw" to ensure a comprehensive understanding.

    Date tags

    Date tags are based on the upload dates of the images, as the metadata does not include the actual creation dates.

    The periods are categorized as follows:

    • 2005~2010: old

    • 2011~2014: early

    • 2015~2017: mid

    • 2018~2020: recent

    • 2021~2024: newest
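    The mapping above can be written as a small lookup; the function name is my own:

    ```python
    # Map an upload year to the corresponding date tag (ranges from the card).
    DATE_RANGES = {
        "old": (2005, 2010),
        "early": (2011, 2014),
        "mid": (2015, 2017),
        "recent": (2018, 2020),
        "newest": (2021, 2024),
    }

    def date_tag(year):
        for tag, (lo, hi) in DATE_RANGES.items():
            if lo <= year <= hi:
                return tag
        return None  # outside the covered range

    date_tag(2013)  # -> "early"
    ```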

    Emphasis

    Given the short training period, some tags might not have been learned well. Through experimentation, increasing the emphasis weight to between 1.5 and 2.5 can still yield decent results, especially for character or artist tags.

    For sd-webui users, please use version 1.8.0 or later and switch the emphasis mode to "No norm" to prevent potential NaN issues.

    Resolution

    This model was trained with aspect-ratio bucketing (ARB) at a base resolution of 1024x1024, with a minimum bucket resolution of 256 and a maximum of 4096. This means you can use the standard SDXL resolutions. However, opting for a slightly higher resolution than 1024x1024 is recommended, and applying a hires-fix is also suggested for better results.

    For more information, please check out the metadata of the sample images provided.

    How This Model Came to Be

    Dataset

    The dataset for training this model was built with HakuBooru, comprising 3.6 million images selected from the danbooru2023 dataset.[2][3]

    A selection process chose 1 million posts from IDs 0 to 2,999,999, another million from IDs 3,000,000 to 4,999,999, and all posts after ID 5,000,000, totaling 4.1 million posts. After filtering out deleted posts, gold-account posts, and posts without images (which could be GIFs or MP4s), the final dataset comprised 3.6 million images.

    The selection was essentially random, but a fixed seed was utilized to ensure reproducibility.
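    That selection could be sketched as below; the ID boundaries come from the description, while the seed value and helper name are illustrative assumptions:

    ```python
    import random

    rng = random.Random(42)  # fixed seed for reproducibility (seed value is illustrative)

    def select_post_ids(all_ids):
        early = [i for i in all_ids if i < 3_000_000]
        middle = [i for i in all_ids if 3_000_000 <= i < 5_000_000]
        late = [i for i in all_ids if i >= 5_000_000]  # keep everything after 5M
        return (rng.sample(early, min(1_000_000, len(early)))
                + rng.sample(middle, min(1_000_000, len(middle)))
                + late)
    ```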

    Further Process

    • Shuffle tags: The order of general tags was shuffled in each step.

    • Tag dropout: Randomly, 10% of general tags were dropped in each step.
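    As a sketch, both steps might look like this per training step (function and argument names are my own):

    ```python
    import random

    def process_general_tags(tags, dropout=0.10, rng=None):
        """Shuffle general tags and randomly drop ~10% of them each step."""
        rng = rng or random.Random()
        kept = [t for t in tags if rng.random() >= dropout]  # tag dropout
        rng.shuffle(kept)                                    # tag shuffle
        return kept
    ```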

    Training

    The training of Kohaku XL Delta was facilitated by the LyCORIS project and the trainer from kohya-ss/sd-scripts. [1][4]

    Base Model Refinement

    Our investigation indicated that training the "token_embedding" and "position_embedding" within CLIP, or the "positional_embedding" in openCLIP, may not be beneficial for fine-tuning on a small to medium scale, particularly with smaller batch sizes.[5][6]

    Consequently, we reverted to the original token and position embeddings from TE1 and TE2 models. Following this, we combined the restored gamma rev2 and beta7 models through a weighted sum (weight=0.5), forming the foundational model for Kohaku XL Delta.

    This foundational model, referred to as "delta-pre2" or "delta base," serves as a preliminary version without further training, positioning its capabilities between Kohaku XL gamma rev2 and Kohaku XL beta7.
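    The weighted sum is the standard per-key interpolation of two checkpoints; a minimal sketch, with plain numbers standing in for torch tensors:

    ```python
    # Weighted-sum merge of two state dicts, w=0.5 as described above.
    def weighted_sum(state_a, state_b, w=0.5):
        return {k: w * state_a[k] + (1 - w) * state_b[k] for k in state_a}

    weighted_sum({"layer.weight": 2.0}, {"layer.weight": 4.0})  # -> {"layer.weight": 3.0}
    ```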

    Algorithm: LoKr[7]

    The model was trained using the LoKr algorithm with the full matrix triggered and a factor of 2~8 for different modules. The aim was to demonstrate the applicability of LoRA/LyCORIS to training base models.

    The original LoKr file is under 800 MB, and the text encoder was not frozen. This original LoKr file is also provided as the "delta-lokr" version.

    For detailed settings, refer to the LyCORIS config file.

    Other Training Details

    - Hardware: Dual RTX 3090s

    - Num Train Images: 3,665,398

    - Batch Size: 4

    - Grad Accumulation Step: 16

    - Equivalent Batch Size: 128

    - Total Epoch: 1

    - Total Steps: 28638

    - Optimizer: Lion8bit

    - Learning Rate: 4e-5 for UNet / 1e-5 for TE

    - LR Scheduler: Constant

    - Warmup Steps: 100

    - Weight Decay: 0.1

    - Betas: 0.9, 0.95

    - Min SNR Gamma: 5

    - Resolution: 1024x1024

    - Min Bucket Resolution: 256

    - Max Bucket Resolution: 4096

    - Mixed Precision: FP16

    Warning: Versions 0.36.0~0.41.0 of bitsandbytes contain significant bugs in the 8-bit optimizers that could compromise training, so updating is essential.[8]
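    For reference, the full-precision Lion update rule with the hyperparameters listed above looks like this for a single scalar parameter (an illustrative sketch, not the bitsandbytes 8-bit kernel):

    ```python
    def lion_step(param, grad, m, lr=4e-5, betas=(0.9, 0.95), weight_decay=0.1):
        """One Lion update: sign of an interpolation of momentum and gradient,
        plus decoupled weight decay."""
        sign = lambda x: (x > 0) - (x < 0)
        update = sign(betas[0] * m + (1 - betas[0]) * grad)
        new_param = param - lr * (update + weight_decay * param)
        new_m = betas[1] * m + (1 - betas[1]) * grad
        return new_param, new_m
    ```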

    Training Cost

    Utilizing DDP with two RTX 3090s, completing 1 epoch across the 3.6-million-image dataset took approximately 17 to 18 days. Each step (at an equivalent batch size of 128) took about 51 to 51.5 seconds to complete.
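    These numbers are mutually consistent, as a quick check shows:

    ```python
    # 28,638 steps at ~51.25 s/step (midpoint of the reported 51-51.5 s).
    steps, sec_per_step = 28_638, 51.25
    days = steps * sec_per_step / 86_400
    print(round(days, 1))  # -> 17.0, matching the reported 17-18 days
    ```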

    Final Merge

    The final model was produced by merging the trained LoKr into the base model at weight 1.0. In other words, this model is entirely the product of training; it was not merged with Animagine XL 3 / PonyXL 6 afterwards.

    What's Next

    Delta is likely the last big update for Kohaku XL, but that doesn't mean I'm done tinkering with it, and I can't guarantee this is actually the last one.

    I'm thinking about running it through a few more epochs or maybe beefing up the dataset to 5 million images soon. Plus, I'm considering trying out DoKr with a bit of a bigger setup for some experimental tweaks.

    (Funny thing, Delta started off as an experiment too, but turned out so well it became a main release!)

    Special Thanks

    AngelBottomless & Nyanko7: danbooru2023 dataset[3]

    Kohya-ss: Trainer[4]

    ChatGPT/GPT4: Refine this model card


    AI art should look like AI, not like humans.


    Reference & Resource

    Reference

    [1] Shih-Ying Yeh, Yu-Guan Hsieh, Zhidong Gao, Bernard B W Yang, Giyeong Oh, & Yanmin Gong (2024). Navigating Text-To-Image Customization: From LyCORIS Fine-Tuning to Model Evaluation. In The Twelfth International Conference on Learning Representations.

    [2] HakuBooru - text-image dataset maker for booru style image platform. https://github.com/KohakuBlueleaf/HakuBooru

    [3] Danbooru2023: A Large-Scale Crowdsourced and Tagged Anime Illustration Dataset.

    https://huggingface.co/datasets/nyanko7/danbooru2023

    [4] kohya-ss/sd-scripts.

    https://github.com/kohya-ss/sd-scripts

    [5] Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

    https://github.com/huggingface/transformers/blob/b647acdb53d251cec126b79e505bac11821d7c93/src/transformers/models/clip/modeling_clip.py#L204-L205

    [6] OpenCLIP - An open source implementation of CLIP.

    https://github.com/mlfoundations/open_clip/blob/73fa7f03a33da53653f61841eb6d69aef161e521/src/open_clip/transformer.py#L598-L604

    [7] LyCORIS - Lora beYond Conventional methods, Other Rank adaptation Implementations for Stable diffusion.

    https://github.com/KohakuBlueleaf/LyCORIS/blob/main/docs/Algo-Details.md#lokr

    [8] TimDettmers/bitsandbytes - issue 659/152/227/262 - Wrong indented lines cause bugs for a long time.

    https://github.com/TimDettmers/bitsandbytes/issues/659


    License

    This model is released under the Fair-AI-Public-License-1.0-SD.
    Please check this website for more information:
    Freedom of Development (freedevproject.org)

    Appendix

    For more information about the details or configs, download the attached file or check my Hugging Face:

    https://huggingface.co/KBlueLeaf/Kohaku-XL-Delta

    Description

    1 epoch on 3.6M images with LoKr


    Comments (30)

    TheFoolAIMar 3, 2024
    CivitAI

    coooooool!!!

    superloongv587Mar 3, 2024
    CivitAI

    My god!!

    GogetaSSGSS3Mar 3, 2024· 7 reactions
    CivitAI

    I wouldn't call this "the best Anime SDXL model" when PonyDiffusionXL exists, HOWEVER, it is still very impressive what you were able to do with this model as an individual. I hope you continue to make more of these models and improve as much as you can!

    kblueleaf
    Author
    Mar 3, 2024· 1 reaction

    Read it carefully:
    "The best" of "SDXL anime models 'trained by an individual'."

    kblueleaf
    Author
    Mar 3, 2024· 1 reaction

    And yes, I will improve on that, with more data and more epochs after my extra two 3090s are installed.

    GogetaSSGSS3Mar 3, 2024· 1 reaction

    @kblueleaf Hmm I see, I thought PonyXL was made by 1 person as well but I'm probably wrong, but regardless like I said don't stop making these, we appreciate your work :)

    kblueleaf
    Author
    Mar 4, 2024

    Based on your suggestion,

    I changed the wording.

    (I can assure you that roughly 90% of people who are training base models use A100/H100s, or at least A40s.)

    KorewaaiMar 3, 2024· 5 reactions
    CivitAI

    I appreciate your thorough description of details

    njarhzro934Mar 3, 2024· 5 reactions
    CivitAI

    Two 3090s? That's really cool. It performs much better than your previous models and gives broke players hope.

    bionagatoMar 4, 2024
    CivitAI

    It seems that the files "artists-kxl-delta.json" and "characters-kxl-delta.json" are identical (both being artists-kxl-delta.json). Could you please re-upload the "characters-kxl-delta.json" file? Thank you, this is indeed a great model!

    kblueleaf
    Author
    Mar 4, 2024

    Yes

    I'm exporting the correct list.

    kblueleaf
    Author
    Mar 4, 2024

    Fixed

    bionagatoMar 4, 2024

    Thanks!

    agress74Mar 4, 2024
    CivitAI

    It's a pity. Almost a masterpiece, but alas not quite. I think it deserves 4 stars.

    My comment does not mean that this checkpoint is bad; half of the checkpoints fail my tests. At least this one tried: it doesn't pass, but it's not just noise, a black square, or something really bad. It's a good checkpoint for easy prompts.

    Rating_AgentMar 4, 2024· 5 reactions
    CivitAI

    Guys, what model is better for NSFW, this one or Pony?

    kblueleaf
    Author
    Mar 5, 2024

    I guess pony

    obuyb404Mar 5, 2024· 2 reactions

    Isn't it obvious? Pony is the best model for NSFW at the moment, maybe there will be better models later, but at the moment it is the best option.

    TomLucidorOct 18, 2024

    Pick and mix, bro

    jiayev1Mar 5, 2024
    CivitAI

    Best personal-trained anime model so far.

    einar_rainhartMar 5, 2024
    CivitAI

    As I use neither artist styles nor series (I use SDXL to create my own characters), this model is very difficult to use. So far I've been struggling to get an output up to my standards (which are undoubtedly too high).

    I'm not blaming the model per se, but perhaps it requires far more effort than what I've put into it for now.

    kblueleaf
    Author
    Mar 6, 2024

    I have provided the artist/character list JSONs in the training data. You can check whether the character/artist you want to use is in them and has a sufficient image count.

    For artists with over 500 images, the style should be fine; 100~500 may need a higher emphasis weight (you can refer to the model card for more detail).

    Characters with over 500 images in the training set can be reproduced directly; for 100~500, you will need to add tags for that character's features.

    The reason why this happens is also noted in the model card.

    einar_rainhartMar 6, 2024

    @kblueleaf Whoops, sorry. I guess I didn't write what I meant correctly. The characters are my own, so they don't exist anywhere else (save my mind...). So far, for all other models, I relied on the "default" style and added tweaks to change it here and there, but without explicitly referring to any artist's style.
    Tags work, the problem is that as far as I understand (is it correct?) for best results one should mix and match artist styles (or use an artist style directly).
    Did I get things right, or is my understanding incorrect? Then, it is highly likely I am using this model incorrectly.
    I guess I'm having issues because there are no artists with a high post count to make a distinctive "anime coloring" style (most of those on Danbooru have between 50 and 150 posts).

    kblueleaf
    Author
    Mar 6, 2024

    @einar_rainhart 50~150 posts could be enough.

    Try combining multiple artist tags (I always encourage users to combine different style tags to mix a new style) and give them a higher emphasis weight.

    Maybe you can finally find something you like.

    But as for the default style, I don't like it either XD

    EagleshadowMar 11, 2024· 2 reactions
    CivitAI

    I'm finding Kohaku Alpha and Beta feel like completely unrelated models to Gamma and Delta.

    Alpha and Beta are coherent and can be used for anything, while having beautiful fine-tuned aesthetic and sense of creativity, and work on pretty much anything.

    Gamma and Delta are extremely anime focused, and seem as they can only be used with booru style prompting. Their coherency seems drastically reduced, and they don't seem to be useful for anything other than generating booru style anime images.

    For example, giving "anime girl riding an elephant in Antarctica" produces a great looking coherent and usable picture with Alpha and Beta, a much worse image with Gamma, and an ugly incoherent mess with Delta.

    kblueleaf
    Author
    Mar 11, 2024· 1 reaction

    Just FYI

    I'm extremely anime-focused

    For myself:

    Delta>beta>alpha>>>>>>>gamma

    Gamma is garbage, no one can deny it

    welchyangMar 14, 2024
    CivitAI

    Based on my previous experience with Kohaku: a LoRA trained with alpha as the base model and then used on beta works better than one trained on beta and used directly on beta. Does delta have a similar situation? Is there a model that can serve as delta's "original model" to use as the training base?

    kblueleaf
    Author
    Mar 14, 2024· 5 reactions

    No, please train directly on Delta. You will find it very easy to train and that it works very well.
    You can refer to the model gallery; many of the images were produced with LoRA/LyCORIS models that users trained on top of delta.

    alicematsuriMar 28, 2024· 3 reactions
    CivitAI

    Hi, I used the lyco 28000 model from your sample images as a LoRA, but it generated several garbled images, while everything is normal with the LoRA disabled. I'm running SD with webui. Am I doing something wrong?

    kblueleaf
    Author
    Mar 31, 2024

    When using the trained diff weights alone, you should use the training base model as the base.

    shiokaze7Jun 9, 2024· 1 reaction
    CivitAI

    Hi, the images I generate with this model always look hazy. Is this a VAE problem? I'm using sdxl_vae; do I need to switch to a different VAE?

    Checkpoint
    SDXL 1.0

    Details

    Downloads
    9,614
    Platform
    CivitAI
    Platform Status
    Available
    Created
    3/3/2024
    Updated
    5/7/2026
    Deleted
    -
